Speak EV - Electric Car Forums

1 - 20 of 29 Posts

Registered · 4 Posts · Discussion Starter #1
Hi guys,

I’m not an engineer, but I cannot help wondering about better and more efficient ways to charge my EV.

I was wondering: what’s the lowest DC voltage I can charge an EV at, using CCS and CHAdeMO plugs? Would it be possible to, say, charge an EV at 100V DC, even though it would be slower than standard DC fast chargers? I was thinking it must be OK, since it’s going straight to the battery, which is DC too?
 

I'm not crazy, the attack has begun. · 28,561 Posts
[quote of the OP's question snipped]
@cah197 has answered this particular question.

But it is slightly more complicated than that if you go into the deep deep technicalities.

Fundamentally you have to apply a voltage which is above the pack voltage, else you can generate no net current into the battery. The higher above the pack voltage it is, the higher the current and the quicker it charges. For rapid charging at 50kW, the open-circuit rest voltage of a typical 30kWh pack at around 50% SOC will be about 375V, against an applied charger voltage of roughly 400V.

Now, it is true to say that the bigger the voltage differential, the less efficient the charging. That 25V difference is the 'loss'; it is the pure, direct manifestation of the battery's charging inefficiency. In theory, therefore, reduce that delta voltage and you reduce the inefficiency. That is certainly true, but the relationship is non-linear. It is a bit like a rubber band: the further you stretch it, the harder you have to pull, so you might get twice the charge rate for only 50% greater losses.
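To put rough numbers on this, here's a toy Python sketch that treats the pack as an ideal voltage source behind a single lumped internal resistance. The 375 V figure comes from the post; the resistance value is my own illustrative assumption, not any real car's:

```python
# Toy model of DC fast charging: pack as an ideal 375 V source behind
# a lumped internal resistance. All numbers are illustrative.
PACK_V = 375.0  # open-circuit pack voltage at ~50% SOC
R_INT = 0.2     # assumed lumped internal resistance, ohms (invented)

def charge_stats(applied_v):
    """Current, input power and I^2*R loss for a given charger voltage."""
    delta = applied_v - PACK_V       # voltage headroom over the pack
    if delta <= 0:
        return 0.0, 0.0, 0.0         # at or below pack voltage: no net charging
    current = delta / R_INT          # I = dV / R
    p_in = applied_v * current       # power drawn from the charger
    p_loss = current ** 2 * R_INT    # dissipated as heat in the pack
    return current, p_in, p_loss

for v in (380, 400, 425):
    i, p_in, p_loss = charge_stats(v)
    print(f"{v} V: {i:.0f} A, {p_in/1e3:.1f} kW in, {p_loss/1e3:.2f} kW lost")
```

Note that doubling the headroom from 25 V to 50 V doubles the current but quadruples the heat, which is the non-linear trade-off described above.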

There are two fundamental voltage drops in the cells [assume some form of C/LiMn type] as they charge: the voltage gradient across the electrolyte and the voltage drop across the SEI on the lithium metal oxide electrode. The voltage drop is a function of lithium-ion mobility, which changes not only with applied voltage gradients and temperature (as we know well) but also with the manner of the application. A 'pseudo-capacitance' will build up around the electrodes, and as a result the voltage gradients across them may change according to the time-varying behaviour of the current. There is some suggestion that it's actually more efficient, for example, not to smooth a rectified current to pure DC but to leave an AC component in the charge current, which is likely the case anyway, since these chargers are not built as super-stable laboratory voltage sources.

It might be worth noting that it is the SEI that degrades first with temperature, and if/as it does, the voltage gradient will drop. If you were then to force a big voltage gradient on the cell, electrons would migrate through the SEI and oxidise the electrolyte, which is precisely the primary damage mechanism of over-charging Li cells and/or charging excessively hot cells. This is why cells have to be thermally managed during charging, and why some manufacturers have gone to, perhaps excessive, lengths to avoid the above.

(The above is all "AFAIK" because this sort of information is not contained in any one definitive study or technical document, it's an accumulation of jigsaw pieces.)
 

Registered · 3,481 Posts
From a non-technical viewpoint, the most efficient and "best" way to charge the car is one that minimises use of non-renewable resources, minimises side-effect damage to air quality, people's health etc, and minimises demands on your wallet. That's why I have solar panels on my house, a setup capable of maxing out at an actual 4kW on a perfect summer day at noon. The lowest charging current, using a domestic 110 or 230V EVSE, is 6A, so for the UK call that 1.4kW. My panels usually manage to output that for large parts of the day from March through October; yesterday was as near perfect, panel-wise, for early March as I've seen in the last 4 years, and the daily total was 10kWh generated, with 1.4kW available for 5 hours!

So something like a Zappi charger which is solar-aware can be set to run in super-eco mode, where it monitors the exported current, and when that's large enough (6A+), it starts charging the car "for free". If a cloud blocks the sunshine, charging is paused, and resumed when that cloud clears. Similarly charging is paused if you put the oven on etc.

I don't think you can get any more "efficient" than this approach. Any technical optimisations of the type that Donald & cah197 refer to are in a way secondary things; yes, optimise the actual detailed stuff, but recognise that so much more can be done to the infrastructure side of things.

Another approach is to get an EV that's more efficient to begin with; Tesla 3/Y, Ioniq and other sleek saloon shapes are rather better than the plethora of brick-like SUVs appearing from Jaguar, Mercedes, Audi, etc.
 

Registered · 3,763 Posts
Also worth noting that CHAdeMO and CCS are not just plug standards but entire charging communication protocols that apply to both charger and EV.



E.g. even if you applied an acceptable DC voltage via a CCS plug, nothing would happen without the correct communication protocol.
 

Registered · 151 Posts
You state you are not an engineer: would you expect a bricklayer to come up with a more efficient way of containing a transmissible virus? No!


You need a minimum level of understanding, education and specialism in any field. Do you have high-school-level maths and physics? If yes, you could educate yourself, starting with some electrical engineering undergraduate texts. Yes?
He was asking a question and yet you reply with a lot of condescending drivel.
 

Registered · 4 Posts · Discussion Starter #7
The voltage would always be matched to the pack voltage.

Why would lowering the voltage be
[quote of the long technical explanation above snipped]
:D Wow.
Knowledge is power, certainly in this case. ⚡

So just to understand better: I could charge on DC at a lower voltage and reduce the power loss, but the net power going into the pack would be well below what's needed for a decent rate. Meaning it would take a much longer period of time to charge up at, say, 100V DC on a Nissan Leaf. So a much longer charging time, with the cells also exposed to heat for longer?

So in other words, don’t do it or you’ll eventually wear out your Nissan’s battery?
 

Registered · 3,481 Posts
OP may not be an engineer, but that's no reason not to ask the question! Let's face it, most car buyers aren't engineers, but they have every right, and indeed should, ask lots of questions about anything offered to them that they're not familiar with. And then it's up to those of us who do understand the details to explain in simple language what's what. And by asking the question the OP has indeed educated himself, and has successfully short-cut the long-winded process of wading through textbooks and sitting exams etc which the rest of us spent years doing.

Asking questions here is an excellent way to learn all this new stuff, very quickly. The amount I've learnt is huge, despite starting out as "technically competent" already when I began EV-ing 4 years ago! This learning never stops; one day I might even be considered "wise". Just possibly?? :)
 

Registered · 16,858 Posts
[quote of the OP's reply above snipped]
No, you don’t get to choose the voltage. That is done automatically.

The charger can lower the Amps supplied, but again that is done automatically according to what the car determines is acceptable for the State of Charge, temperature, etc.

So on DC you don’t get a choice. You plug in and the car does the rest.
 

Registered · 3,481 Posts
:D... I can charge in DC at a lower voltage and conserve power-loss....
... Meaning it would take a much longer period of time to charge up say at 100V DC on a Nissan Leaf....
Think about charging a standard lead-acid 12V car battery that's run down and sitting at 12.0V. This battery actually consists of 6 cells in series, each 2V. But it's all packaged up, just like EV batteries, so you can only get at the 12V and 0V ends. If you want to charge it up, you need something that's >12V to do this. Plug an 11V charger in and nothing will happen, except the 12V battery, already almost completely flat, will try to go even flatter by discharging itself back through the charger!

Typically a flat EV battery is somewhere around 350V, up to around 400V when full. It all depends on just how many of the roughly 4V lithium cells are stacked in series to make up the pack. So you need a charger voltage a bit above the pack voltage to charge; the faster you want to pump power into your EV, the bigger that voltage difference must be.
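The series-stack arithmetic can be sketched in a few lines of Python. The per-cell voltages and the 96-cell count below are generic textbook-style figures, not any particular car's:

```python
# Series-stack arithmetic for a nominal "400 V class" pack.
# Per-cell voltages and the series count are illustrative assumptions.
CELL_EMPTY = 3.6      # volts per cell near a typical BMS low cutoff
CELL_FULL = 4.2       # volts per cell, fully charged
CELLS_IN_SERIES = 96  # a common count for 400 V class packs

def pack_voltage(volts_per_cell, n=CELLS_IN_SERIES):
    """Cells in series simply add, so the pack is n times one cell."""
    return n * volts_per_cell

print(f"{pack_voltage(CELL_EMPTY):.1f} V flat")   # roughly 346 V
print(f"{pack_voltage(CELL_FULL):.1f} V full")    # roughly 403 V
```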

Don't even think of diving in to charge the individual cells at 4V using lots of 4.5V chargers! The charging process has to be done uniformly for every cell, and even then, at the end of the process when fully charged, there is another process called balancing, where the Battery Management System (BMS) looks at the individual cells very, very closely and preferentially discharges a few, in order to keep the same amount of charge in every cell. It's complicated stuff.
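Just to show the flavour of that balancing decision, here's a toy sketch; the voltages, the threshold and the bleed-the-high-cells strategy are all invented for illustration, not how any real BMS is specified:

```python
# Toy passive-balancing pass: flag any cell sitting more than a threshold
# above the lowest cell, so it can be bled down. Purely illustrative.
def cells_to_bleed(cell_voltages, tolerance=0.02):
    floor = min(cell_voltages)
    return [v - floor > tolerance for v in cell_voltages]

pack = [4.15, 4.20, 4.16, 4.21]  # measured cell voltages, volts
print(cells_to_bleed(pack))      # -> [False, True, False, True]
```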

Back to your 100V DC charger: it isn't going to charge anything! So you need to convert that up to more like 400V DC, and stepping DC voltages up takes extra conversion hardware and losses; it's easy to drop them down though. So instead you start with an AC supply, use a transformer (very efficient, 99% or thereabouts) to generate whatever AC output voltage you like (500V is no sweat at all), then convert that to DC using rectifier diodes (not quite as efficient as transformers; you'll lose a couple of volts), and then use that to charge the 400V pack with no bother.

Large battery packs which need charging as fast as possible need a lot of power; the most efficient way to move electrical power down a wire is to maximise the voltage and minimise the current, because it's the current (I) and the resistance of the wire (R) that determine the heating losses: power lost = I * I * R.
This is why Porsche are going to 800V battery tech rather than 400V. Doubling the voltage halves the current for the same power moved, and in turn quarters the heat loss in the same-sized wire, so this is good. Alternatively, they can make the wire 1/4 the cross-sectional area, so 1/4 the weight & cost, and keep the heat loss the same, as R has now increased to 4R while I has halved in this 800V system.
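The I * I * R point with round numbers (power and cable resistance are made-up illustrative values):

```python
# Moving the same 100 kW down the same cable at 400 V vs 800 V.
P = 100_000.0  # watts transferred (illustrative)
R = 0.02       # cable resistance, ohms (illustrative)

def cable_loss(volts):
    amps = P / volts              # current needed for this power
    return amps, amps * amps * R  # (current, I^2*R heat in the cable)

for v in (400.0, 800.0):
    amps, watts = cable_loss(v)
    print(f"{v:.0f} V: {amps:.0f} A, {watts:.1f} W lost in the cable")
```

Half the current, a quarter of the heat, exactly as described above.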

So there are lots of considerations here. Why aren't all EVs using 800V already? Well, 400V DC is dangerous enough for technicians to work with; 800V is more dangerous still and likely to kill if mistreated. 800V also needs more expensive silicon chips and better insulation. But it will probably happen because of the cost savings in the quantity of copper wire needed.
 

Registered · 4,700 Posts
So just to understand better, I can charge in DC at a lower voltage and conserve power-loss.
No. Lots of technical words written above, which probably confused you more.

The typical EV battery is about 375V output.
To charge it you have to push power into it.
To push power back into it the voltage you need to apply must be higher than 375.
 

Registered · 16,858 Posts
Or in layman’s terms, think of Voltage (Joules per Coulomb) as water pressure and Amps (Coulombs per second) as flow rate.

I think I have that the right way round?
 

Registered · 4 Posts · Discussion Starter #13
Well said.
[quote of the post above snipped]
Cheers mate, well said. I’ve already learnt a lot today, even about stereotypical EV drivers. But one must not make swift judgements on first impressions.

The most successful people (some of whom drive expensive EVs such as a Taycan) ask questions about the most basic things... Why or why not?
 

Registered · 1,881 Posts
The BMS (battery management system) is vital for Li-ion cells, and can be found in the cheapest of rechargeable powerbanks from the £ shop, never mind the complex one in an EV. Lead-acid cells can get away without a BMS.

So you start with an AC supply, use a transformer (v efficient, 99% or thereabouts) to generate whatever AC output voltage you like, 500V is no sweat at all, then you convert that to DC using rectifier diodes (not quite as efficient as transformers, you'll lose a couple of volts), and then you use that to charge the 400V pack with no bother.
I think you will find synchronous rectification is in use in high-power DC chargers, as it's a bit more efficient, and at these power levels that matters! Plain rectifiers are now old hat :(

synchronous rectification pdf
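For a feel of why synchronous rectification wins at these currents, a back-of-envelope conduction-loss comparison. The part values are generic datasheet-style figures I've assumed, not any specific device:

```python
# A diode drops a roughly fixed forward voltage, while a switched-on
# MOSFET channel looks like a small resistor. Figures are generic.
I = 100.0        # amps through one conducting element
V_F = 0.7        # assumed silicon diode forward drop, volts
R_DS_ON = 0.002  # assumed MOSFET on-resistance, ohms

p_diode = V_F * I        # watts burnt in the diode
p_fet = I * I * R_DS_ON  # watts burnt in the MOSFET channel
print(f"{p_diode:.0f} W vs {p_fet:.0f} W per conducting element")
```

The diode's fixed drop costs more and more as current rises, which is why high-power designs swap it for an actively driven MOSFET.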
 

Registered · 3,763 Posts
He's not an engineer but also incapable of understanding the most basic of technical concepts: must be a bricklayer?
 

Registered · 3,481 Posts
There are times when it's actually very valuable to have someone totally unfamiliar with a problem, or the tech involved, look at an issue.

Warning: long ramble, interesting story, complete red herring re the OP's question! Exit now if uninterested!

One such instance nearly happened to me. Years ago I moved from Mech Eng research, where I was playing with newfangled microprocessors, and changed completely, going into IT with a large multinational. I started working on 3D computer graphics, about which I knew zilch. The big unsolved problem at that moment was making realistic images of "real-world" models; think of a Time-Team reconstructed Saxon hut lit by a candle inside it. How do you calculate how much light reaches any part of a wall, bearing in mind that diffuse light will bounce off many surfaces before arriving at that point? The possible paths for light are near infinite, and the shadows are gradual, not sharp.

People had been making pretty pictures of highly reflective, shiny things; this was called ray-tracing. You started with light coming out of your bright spotlight at a single point, and the computer followed lots of rays radiating out from there; when a ray hit the wall, that was the colour and brightness of the wall solved. If the ray hit a shiny sphere (lots of these in pictures at that time!) it then split into multiple further, less-powerful rays emanating from the sphere's surface, and the process repeated. After a few reflections between mirror surfaces with no wall hit yet, forget that ray, it's not going to be noticed. You might follow maybe 3 levels of reflection. You got nice images of things reflected in mirrors, like a hall of mirrors, but this scenario doesn't happen that often when you look around you! Diffuse lighting is what the real world sees.

You can see that this process generates an explosion of rays of light, and typically making these images would take about 1 hour on a powerful mainframe dedicated 100% to the task! All for one picture. Sharp shadows abounded, diffuse shadows were non-existent.

Back to my experience of this: I was highly impressed by the glossy pics, and used to attend graphics conferences to learn about all this new stuff coming along. I (and others around me, some of whom were world experts in this stuff) considered the problem of diffuse lighting insoluble, as you would need to start with maybe 1000 times as many rays from your diffuse light source (strip-light, that kind of thing), followed by ray-tracing 100x as many rays, to get the required result. Totally beyond reasonable compute power. What chance of me, a total newbie, cracking this?

Then I attended a conference where someone had approached this problem from a totally different direction: rather than consider each ray of light, they considered the energy and where it went. They modelled a hotel lobby as a lot of smallish flat sheets (standard 3D modelling stuff, nothing new) and then worked out what other sheets could be seen from each sheet. Again, standard stuff for what's hidden from view etc. This visibility info defines how much energy will arrive at each sheet, supplied from whatever other sheets are visible. Arrange this as a matrix, invert it (a standard maths operation), and the problem is solved! You get uniform colour and brightness for a sheet, and maybe you spot the change at the edge of a sheet? If so, split those sheets into multiple smaller ones and refine and enlarge the matrix, no sweat. A little fudging at smooth joins of sheets is easily added in, standard graphics trickery!
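That energy-balance trick is classic radiosity, and the core of it really is just one linear solve. A minimal sketch, where the form factors, reflectivity and 3-patch scene are all invented for illustration:

```python
# Minimal radiosity solve: each patch's brightness B is its own emission E
# plus reflected light gathered from the patches it can see:
#   B = E + rho * F @ B   =>   solve (I - rho*F) B = E  in one step.
import numpy as np

F = np.array([[0.0, 0.4, 0.3],   # invented form factors: fraction of
              [0.4, 0.0, 0.3],   # light leaving one patch that lands
              [0.3, 0.3, 0.0]])  # on another, for a 3-patch toy scene
rho = 0.5                        # diffuse reflectivity of every patch
E = np.array([1.0, 0.0, 0.0])    # only patch 0 emits light

B = np.linalg.solve(np.eye(3) - rho * F, E)  # the single matrix solve
print(B)  # every patch ends up lit, via inter-reflection alone
```

One matrix solve replaces following millions of individual rays, which is why the result landed so hard at that conference.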

When this paper was presented, the entire auditorium stood up and applauded - I was there. This was very much the holy grail of realistic 3D modelling.

Thinking about it, I could have solved the problem with what I already knew when I moved into IT from Engineering! The approach is precisely the same as solving for stresses in indeterminate structures like welded steel building frames, where loading up one beam affects the ones it's joined to, then those beams affect their neighbours etc, and those in turn might affect the original beam you loaded - you can see how the connections & effects grow exponentially, just as they do in ray-tracing. There was a standard engineering text-book way to solve this problem - and it could have solved that long-term Graphics problem, had I but realised it! But someone else did, so they're famous, not me!
 

Registered · 3,481 Posts
Spiny, thx for the pointer to synchronous rectification - hadn't met the term before; reading up on Wikipedia has answered a question I had in my mind. I've wanted to use this technique to control mains current, and my son & I have made a system using a full rectifier bridge with a single MOSFET in the DC side to control current up to 13A, but as you say the heat loss in the diodes is an issue. I'd considered replacing the 4 diodes with 4 MOSFETs, but most of the ones I've met are designed for DC motor control, and have a built-in freewheel reverse diode to protect them against reverse voltages. So I've never known what reverse voltage a MOSFET can stand! Which is why we dodged the problem by letting the diode array handle the reverse voltages first. This suggests that we can instead use intelligent switching of 4 MOSFETs arranged in a full-bridge rectifier layout to reduce our heat losses; we just need MOSFETs without the freewheel diodes in! Not sure what gate voltages etc we need to arrange, but that's all low-power stuff so is eminently doable. Do we need full-wave 4 MOSFETs? Need to think about that as well. Cheers, mate!
 

Registered · 4 Posts · Discussion Starter #18
[quote of the ray-tracing story above snipped]
I know exactly what you're talking about. I went from Modern History to Fintech. Actually worked quite well.
 

Registered · 1,087 Posts
I'd considered replacing the 4 diodes with 4 MOSFETs, but most of the ones I've met are designed for DC motor control, and have a built-in freewheel reverse diode to protect them against reverse voltages.
Remember MOSFETs are bidirectional, so you can use them in the direction of the internal diode to reduce losses.
 

Registered · 3,481 Posts
Hang on a moment! If I'm going to use a MOSFET to selectively allow forward conduction when the AC peak is nicely positive, and I want a positive DC coming out, then that same MOSFET must block the reverse AC voltage when that's gone negative. So "pure" MOSFETs per se cannot be bidirectional. Problem is that plenty are bidirectional, because the manufacturers have included a reverse-biased Schottky diode to allow the freewheel currents & spikes to dissipate safely when coils have been turned off, rather than generate kilovolts (? killervolts !!?) when the turned-off MOSFET tries to halt the flowing current instantly. So I reckon these synchronous rectifiers need the MOSFETs which come without that added-in Schottky diode. Or have I gone badly astray? I'm already waaay off topic, as it is!
 