As per the title.
Wrong. The reason I asked is that I said this on another forum:
The reason it takes longer to charge to 100% is that for the last 20% the BMS (battery management system) will slow the charging rate down: one, to let the battery cool; two, to balance the battery cells so they are all charged to the same level. An EV should be charged to 100% at least once a month just for balancing. There's no need to charge to 100% every time, though. Because I only charge once a week I charge to 100%, but if I needed to charge every day I would only charge to 80%.
I have only used a public charger once and charged at 90 kW for 20 minutes just to make sure everything worked.
The Corsa EV will charge at a rate of up to 100 kW, and it's the car that has the charger built in, not the charge point, which is just a power supply.
"Sorry, that's rubbish." And this is a reply that I got:
Sorry to disagree. It is not the BMS that limits the charge rate; it is the nature of charging, which is CCCV.
Constant Current, Constant Voltage.
A battery can only be charged up to a specific maximum current. It can also only accept a charge voltage up to its maximum charge voltage.
This explanation is not just aimed at you, but for anyone interested in what I mean by CCCV.
Let's use an imaginary battery so that the maths is easy.
The battery is flat at 2 V, full at 4 V, and it is a 1 Ah battery.
The charge voltage is directly related to the charge current: if you increase the charge voltage relative to the battery voltage, you increase the charge current proportionately.
Let's say the internal resistance of the battery is 1 Ω and the max charge rate of the battery is 1C, so the maximum charge current would be 1 A.
If the battery is flat and showing 2 V, we cannot charge at the full 4 V maximum charge voltage, because 4 V - 2 V = 2 V (difference) and 2 V / 1 Ω = 2 A, which is double the maximum charge current.
So you set the charge voltage at its maximum allowed offset, which is 1 Ω x 1 A = 1 V above the battery voltage.
At the start, when the battery is empty at 2 V, the charge voltage would start at 3 V. As the battery charges, the charge voltage would increase to stay 1 V ahead of the battery voltage.
The charge voltage would stay 1 V ahead of the battery voltage until it reaches 4 V, at which point it couldn't get any higher.
That is the constant current part of the charge cycle: the fast bit, usually up to around 80%.
We then move to the constant voltage part, where the charge voltage is capped. The charge voltage cannot go any higher than 4 V, so as the battery's voltage continues to rise, the difference between them falls and the charge current falls proportionally.
Here is a quick, rough and ready table for the imaginary battery. The first two rows show constant current and the rest show the slow fall in charge current once it enters constant voltage mode.

| Battery voltage | Charge voltage | Charge current |
| --- | --- | --- |
| 2.0 V | 3.0 V | 1.0 A |
| 3.0 V | 4.0 V | 1.0 A |
| 3.2 V | 4.0 V | 0.8 A |
| 3.5 V | 4.0 V | 0.5 A |
| 3.8 V | 4.0 V | 0.2 A |
| 4.0 V | 4.0 V | 0.0 A |
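If it helps, here's a rough Python sketch of that same imaginary battery (2 V flat, 4 V full, 1 Ω internal resistance, 1 A / 1C limit). The linear voltage-vs-charge behaviour and the time step are just assumptions to keep the illustration simple, not a real lithium cell model:

```python
# Rough CC/CV sketch of the imaginary battery described above:
# flat at 2 V, full at 4 V, 1 ohm internal resistance, 1 A (1C) maximum charge current.
# The linear voltage-vs-charge behaviour and the time step are illustrative
# assumptions, not a real lithium cell model.

FLAT_V = 2.0        # battery voltage when empty
FULL_V = 4.0        # maximum charge voltage
R_INT = 1.0         # internal resistance, ohms
I_MAX = 1.0         # 1C limit for a 1 Ah battery, amps
CAPACITY_AH = 1.0

def charge_step(batt_v):
    """Return (charge_voltage, charge_current) for a given battery voltage."""
    # Constant current: hold the charger I_MAX * R_INT (= 1 V) above the battery,
    # but never above the 4 V cap.
    charge_v = min(batt_v + I_MAX * R_INT, FULL_V)
    # Once the cap is hit (constant voltage), the shrinking difference sets the current.
    current = (charge_v - batt_v) / R_INT
    return charge_v, current

batt_v = FLAT_V
dt_hours = 0.1
while batt_v < FULL_V - 0.01:
    charge_v, current = charge_step(batt_v)
    print(f"battery {batt_v:.2f} V   charger {charge_v:.2f} V   current {current:.2f} A")
    # Assume the battery voltage rises linearly with charge put in (2 V of rise per 1 Ah).
    batt_v += current * dt_hours * (FULL_V - FLAT_V) / CAPACITY_AH
```

Running it prints the full current during the constant current phase, then the current tailing off once the charge voltage hits the 4 V cap, which is the same shape as the table above.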
The BMS will kick in when one cell reaches the maximum voltage. It will apply a resistance between the cathode and anode of that cell to drain it a little while the rest catch up. Rinse and repeat.
Helpful, but with respect, we still don't know which strategy has been adopted by which manufacturers (exc. Tesla).

As someone that's been building battery packs for years, here's a brief explanation of what happens in the real world, as measured by yours truly. When you start to charge a series-connected pack of cells (or paralleled cell groups, which behave just like single cells), current flows through the whole pack as it charges. Lithium chemistry cells have a pretty flat voltage right up until they reach almost full capacity, when the terminal voltage rapidly rises. For LiFePO4 cells that voltage is typically around 3.6 V to 3.65 V or so; for most other lithium-ion chemistries it is around 4.15 V to 4.2 V. If the limiting terminal voltage is exceeded, there is a risk of cell damage and overheating. The margin is tight: just 100 mV over the maximum allowable cell terminal voltage during a high-current charge may cause a cell to seriously overheat and will likely cause cell damage.
The problem is this. Manufacturers try to match the capacity of cells in packs, so they don't vary much, but there is always some small variation. This means there will always be one cell (for cell, read cell group from now on) that reaches cut-off voltage before all the others. A consequence of this is that this cell has to trigger the charge to slow down and stop, as if it did not there's a risk of damage, even fire, as that cell's voltage continues to increase. The snag is that when it does this, the rest of the cells in the pack may well not be fully charged at all. You cannot allow current to flow through a cell that has reached cut-off voltage, for fear of damage, therefore current cannot flow through it to charge the other cells. It is worth noting that the terminal voltage of every cell is measured by the BMS, primarily to allow it to detect the highest-voltage cell during charging, but equally the lowest-voltage cell during discharge (there's a similar lower voltage cut-off that must be adhered to in order to keep cells safe and reliable).
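Just to illustrate the "highest cell governs the charge" point, here's a minimal sketch. The 4.2 V limit, the taper start point and the linear taper are my own illustrative assumptions, not any particular manufacturer's charge strategy:

```python
# Illustrative sketch of the "highest cell governs the charge" idea described above.
# The 4.2 V limit, taper start point and linear taper are assumptions for the
# example, not any real manufacturer's charge strategy.

CELL_MAX_V = 4.20      # per-cell limit for a typical Li-ion chemistry
TAPER_START_V = 4.10   # assumed point at which this sketch starts backing the current off

def requested_charge_current(cell_voltages, pack_max_current):
    """Pick a charge current based on the single highest cell in the pack."""
    highest = max(cell_voltages)
    if highest >= CELL_MAX_V:
        return 0.0                           # one cell is full: stop, even if the rest are not
    if highest >= TAPER_START_V:
        # Back the current off linearly as the highest cell approaches the limit.
        fraction = (CELL_MAX_V - highest) / (CELL_MAX_V - TAPER_START_V)
        return pack_max_current * fraction
    return pack_max_current                  # all cells comfortably below the limit

# Example: four cell groups, one sitting noticeably higher than the others.
print(requested_charge_current([4.05, 4.06, 4.15, 4.04], pack_max_current=125.0))  # -> 62.5
```

The point being that one high cell drags the charge current down for the whole pack, regardless of the state of the others.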
The original fix for this problem was to use cell shunt resistors that are individually switched across cells by the BMS when that cell reaches cut-off. These resistors bypass the cell(s) that are fully charged and allow current to flow to the remaining cells in the series stack. Obviously, the shunt resistors cannot pass a very high current, so charging slows down a lot as they activate. The BMS signals to the charger to reduce the current to a level that the pack can safely take at that time, using multiple sensors, including cell temperature, or can just use a crude CCCV method (this is far too crude for proper charge control, though). There are obvious flaws with cell shunt balancing: it's not very efficient; for fast charging the shunts need to pass a high current (to reduce the time spent in the balance phase); the shunts will get hot as they dissipate power that would otherwise be heating up the cells; etc.
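Something like this, as a very rough sketch of the switched-shunt idea (the cut-off voltage and the shunt resistance are just example values I've picked, not from any specific BMS):

```python
# Sketch of switched-shunt (passive) balancing as described above: when a cell
# group reaches cut-off, a bleed resistor is switched across it so the rest of
# the series string can carry on charging. The cut-off voltage and resistance
# are example values, not taken from any specific BMS.

CUTOFF_V = 4.15        # per-cell cut-off used in this example
SHUNT_R = 33.0         # ohms; a small bleed resistor, so the bypass current is low

def update_shunts(cell_voltages):
    """Return, for each cell, whether its shunt is on and the power it burns off."""
    states = []
    for v in cell_voltages:
        shunt_on = v >= CUTOFF_V
        power_w = (v ** 2) / SHUNT_R if shunt_on else 0.0   # heat dissipated in the resistor
        states.append((shunt_on, round(power_w, 2)))
    return states

# One cell group has reached cut-off; its shunt turns on while the others keep charging.
print(update_shunts([4.08, 4.15, 4.07, 4.09]))
# -> [(False, 0.0), (True, 0.52), (False, 0.0), (False, 0.0)]
```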
The method I now use (and I suspect that some EV manufacturers must be looking at using by now) is an active balancing BMS. This shuts off charging when one cell reaches its set cut-off voltage (I shut mine off at 4.15 V, although the cells will take 4.2 V; it makes the pack last longer), but, because the pack starts charging from a very well balanced state, it's pretty much balanced when the first cell shuts off the charge. The balancing works by switching a capacitor across each cell in turn, charging the capacitor from the highest-voltage cell(s) and discharging it into the lowest-voltage cell(s). After some time (off charge) the cells will all be at essentially exactly the same cell voltage, and hence SoC. This system is less wasteful of charge power (none gets wasted in switched cell shunt resistors), but it is slow (best done when the pack is idle) and it also means that the pack doesn't ever quite get charged to a true 100%. This latter point can be a benefit, as packs tend to last a lot longer if rarely charged to 100%.
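A toy version of that flying-capacitor style balancing might look like this (the step size and cell voltages are made-up example numbers; a real balancer moves charge via the capacitor rather than shifting cell voltages directly):

```python
# Toy version of the flying-capacitor style of active balancing described above:
# charge is shuffled from the highest-voltage cell group to the lowest rather
# than burned off in resistors. Shifting cell voltage directly by a fixed step
# is a simplification of what the capacitor actually does.

def active_balance_step(cell_voltages, step_v=0.005):
    """Move one small increment from the highest cell to the lowest."""
    cells = list(cell_voltages)
    hi = cells.index(max(cells))
    lo = cells.index(min(cells))
    if cells[hi] - cells[lo] <= step_v + 1e-9:   # within one step: call it balanced
        return cells, True
    cells[hi] -= step_v                          # capacitor charged from the highest cell
    cells[lo] += step_v                          # ...then discharged into the lowest cell
    return cells, False

# Run off-charge until the pack converges to (roughly) equal cell voltages.
pack = [4.15, 4.11, 4.13, 4.09]
done = False
while not done:
    pack, done = active_balance_step(pack)
print([round(v, 3) for v in pack])   # -> [4.12, 4.12, 4.12, 4.12], the pack average
```

Note that the cells end up at the pack average rather than being dragged down to the lowest cell, which is why nothing gets wasted as heat.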
There are other ways of cell balancing. I had a long online row with the late Jack Rickard around 15 years ago, as he was insistent that "bottom balancing" was the only way to do it. It does work (Jack used it for some packs he built) but it's not any more efficient, and allowing cells to reach their lowest acceptable terminal voltage regularly is as bad as allowing them to reach their highest acceptable terminal voltage. IIRC, Jack did move away from doing this, although he never had the decency to admit he was mistaken.
All the EV BMS boards I've seen have a bunch of power resistors that look to be in the 1-2 watt range, and MOSFET/transistor switches for resistive balancing.
For example:
The BMW i3 is similar.