
Warning: This page is a bit of a mess, and I'm not sure anyone will read it at all. I'll gladly put more work into it if people seem interested. Let me know on the talk page.

Method

Computer-assisted step-by-step minima optimization via recursion in a 9-dimensional parameter space. That was meant to sound smart; technically it boils down to running millions of simulations and seeing which one best fits the data we have. Some of the curves were obtained by locking certain parameters at specific values to see how plausible the resulting curve is, as explained above.
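The "step-by-step minima optimization" amounts to a coordinate descent: nudge one parameter at a time by a multiplicative factor until the fit stops improving, then move to the next. A minimal Python sketch of the idea (the page's own script, further down, is in PHP; the function names and the toy data here are purely illustrative):

```python
def coordinate_descent(params, error_fn, factors, rounds=5):
    """Step-by-step minima search: adjust one parameter at a time by a
    multiplicative factor until the fit error stops improving, then move on.
    `params` is a dict, `error_fn(params)` returns the fit error,
    `factors` maps each parameter name to its adjustment factor."""
    for _ in range(rounds):
        for name, adj in factors.items():
            for step in (adj, 1 / adj):       # try increasing, then decreasing
                best = error_fn(params)
                for _ in range(100):          # safety cap, as in the PHP script
                    params[name] *= step
                    err = error_fn(params)
                    if err > best:            # got worse: undo the step and stop
                        params[name] /= step
                        break
                    best = err
    return params

# Toy usage: fit y = a * b**x to two points by minimizing squared error.
data = [(0, 2.0), (3, 16.0)]
err = lambda p: sum((p['a'] * p['b'] ** x - y) ** 2 for x, y in data)
fit = coordinate_descent({'a': 1.0, 'b': 1.5}, err, {'a': 1.03, 'b': 1.003})
```

By construction the error never increases, so this reliably finds a local minimum, which is exactly the caveat the script's own warning comment makes.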

Suggested model

July 18

$m['a']=-125.65159588571;//num of days before feb 16 (initial case)
$m['b']=1.0684323078723;//growth rate (per day)
$m['d']=0.0048137171567879;//Maximum Impact of mitigation
$m['e']=0.82834719039164;//Likely % of people susceptible on feb 16
$m['f']=0.00046376433202448;//% of people becoming susceptible (remote villages etc..)
$m['g']=0.99327190491309;//% of immune people keeping their immunity (daily)
$m['h']=10553.565694855;//Death count before feb 16
$m['i']=0.0072242127659876;//seasonal amplitude
$m['c']=426383.87786083;//Total susceptible

July 28

I had the idea of forcing the h parameter to be consistent with the simulation itself. I'm not sure whether that is a stroke of genius or of folly at the moment. It would be good news, as the first few days of Italy's deaths would then look like over-counting of earlier deaths. It's the new optimistic approach. It is noteworthy that, for the first time, this method does not enforce anything prior to the data; it only forces the death count to agree with itself. EDIT: I just called 8,000 extra deaths optimistic. Never trust an epidemiologist bearing "good news".

$m['a']=-159.17168234663;//num of days before feb 16 (initial case)
$m['b']=1.0722850170194;//growth rate (per day)
$m['d']=0.00034437131117403;//Maximum Impact of mitigation
$m['e']=0.57673718801671;//Likely % of people susceptible on feb 16
$m['f']=0.00033473039718786;//% of people becoming susceptible (remote villages etc..)
$m['g']=0.99386795745051;//% of immune people keeping their immunity (daily)
$m['h']=18809.424215614;//Death count before feb 16
$m['i']=0.0072242127659876;//seasonal amplitude
$m['c']=573024.27734544;//Total susceptible

Jan 2 2021

Hello. I calculated in April that 45% of the population had some resistance. There is also clear evidence of a new variant of Covid-19 that seems to spread like wildfire. I'm tempted to put 1 + 1 together and assume that Covid found its way into the other half of the population.

Is there a chance that we have them inverted? That is, the second variant would be responsible for the second wave (whose deaths in Italy peaked around Dec 1), and the second wave of the first variant is yet to peak?

November 17

I have been experiencing technical difficulties since the start of the second wave. I was expecting it a little later (30-60 days). There are many things that are difficult to model.

  1. Varying human attempts to affect the curve.
  2. Improvement in therapy.
  3. Improvement in prevention techniques.
  4. How much faith to put in the early data.
  5. The number of people affected. I used to have success assuming a percentage of newly affected people per region, but as time passes and the complexity of the situation increases, that no longer seems applicable.

While I'm struggling to push the model to the point where I can predict the exact month a wave happens, I am extremely confident about the amplitude of the waves. The long-term projections don't change much.

November 16

Source: SARS-CoV-2 RBD-specific antibodies were detected starting from September 2019 (14%). That's about 170 days before Feb 16th.

October 19

Stubborn model. It doesn't want to recognize any human intervention at all. Also, the more data I throw at it, the more it starts to ignore the first peak in the data. I think I might be measuring the improvement in treatments? I might be back in a few days with yet another dimension added to the model.

September 20

The July 28th model suggested an increase in cases from around Sept 1. The actual data also seem to be on the rise. I just came here to continue the projection. The second wave is coming, as far as I can tell. The extra data only allows imperceptible changes to the model. I'm not even making a graph, but here is the data for the hard-core curious.

$m['a']=-159.17168234663;//num of days before feb 16 (initial case)
$m['b']=1.074216578214;//growth rate (per day)
$m['e']=0.56560140978122;//Likely % of people susceptible on feb 16
$m['f']=0.0002151190810254;//% of people becoming susceptible (remote villages etc..)
$m['g']=0.99386795745051;//% of immune people keeping their immunity (daily)
$m['h']=18809.424215614;//Death count before feb 16
$m['i']=0.011592740743;//seasonal amplitude
$m['c']=509124.64856267;//Total susceptible

I guess the reduction of 64k susceptible is good news. However, I also came here to confirm the second wave and its inevitability.

Old model collection

While I believe the latest projection is likely the most accurate, I also want to show what my simplified method was already able to achieve in April. After all, I'm just a stranger on the internet, but showing the accuracy I obtained even at an early date, with partial data, hopefully demonstrates the kind of accuracy this approach can reach. So enjoy the previous estimates I made here.

April

My suggested model, in red, places the first death in Italy around Dec 10. Growth (daily): 1.0785697466622; susceptible to die: 317269 Italians.

Death count by July: 40519. Distancing impact so far: 1500 deaths. If maintained up to July (?): 2784 deaths.

The R0 seems high, ~2.5, but 45% of the population also seems resistant, cutting the pool of victims quite a bit; therefore about 13% of the susceptible population dies.

Rerun may 9

$m['a']=-93.496587895621; //~dec 15
$m['b']=1.0779228959116; //Growth rate per day (R0 of 3 assuming 15 days of infectiousness)
$m['d']=0.0084969690972043; //impact of distancing on $m['b']
$m['c']=317269.64892627; //Maximum susceptible deaths.

Troubled by the exact same susceptible number, I tried to kick the 'b' parameter to another position (1.1) and ran the script, this time fitting the susceptible amount first in the approximation. I landed on vaguely different numbers:

$m['a']=-85.562622590657;
$m['b']=1.0849025277116;
$m['d']=0.016467854572557;
$m['c']=308028.78536532;

Everything fought its way back to a very similar position. Since the distancing impact was higher in that model, I kept the no-distancing line for this one.

June 22

Hi, I'm back!

<?php
$m['a']=-125.65159588571;//num of days before feb 16 (initial case)
$m['b']=1.0852279984699;//growth rate (per day)
$m['d']=0.0093283765873803;//Maximum Impact of mitigation
$m['e']=0.56407657414306;//Likely % of people susceptible on feb 16
$m['f']=0.0011272578709613;//% of people becoming susceptible (remote villages etc..)
$m['g']=0.99595695951109;//% of immune people keeping their immunity (daily)
$m['h']=11255.0881;//Death count before feb 16
$m['c']=725890.01044992;//Total susceptible

--- 2 (trying to find a credible mitigation impact)
$m['a']=-141.42197815993;//num of days before feb 16 (initial case)
$m['b']=1.0706780360658;//growth rate (per day)
$m['d']=0.01887266008676;//Maximum Impact of mitigation
$m['e']=0.79285388387757;//Likely % of people susceptible on feb 16
$m['f']=0.0015038745097599;//% of people becoming susceptible (remote villages etc..)
$m['g']=0.99745179158056;//% of immune people keeping their immunity (daily)
$m['h']=10553.565694855;//Death count before feb 16
$m['c']=747666.71076345;//Total susceptible

It became obvious, as I simulated on more and more data, that the model I had was too simple. More susceptible people were entering the model over time; there were people infected before Feb 16, etc. There are many other factors I could simulate. Sadly, the more I add, the more it becomes art rather than hard science, as none of this has been tested. So take the numbers here with skepticism.

Two things are worth mentioning about this model. Sadly, it seems to stabilize at a g parameter of about 0.996, which means about 0.4% of immune people lose their immunity each day. That means people could be reinfected every year. Also, the a parameter seems to have backed up an entire month since I allowed the model to consider the h parameter. That would put the first case in Italy around Oct 15.
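To make the g parameter concrete, here is a quick check of what a daily immunity-retention rate implies, using the g value from the first June 22 model above (a Python sketch; the "half-life" framing is my own way of reading the parameter, not something the model computes):

```python
import math

g = 0.99595695951109        # daily fraction of immune people keeping immunity

# Days until half of the once-immune have lost their immunity:
half_life = math.log(0.5) / math.log(g)
# Fraction still immune after one year of daily decay:
after_year = g ** 365

print(f"half-life: {half_life:.0f} days")        # roughly 171 days
print(f"immune after a year: {after_year:.1%}")  # roughly 23%
```

With only about a quarter of the immune still protected after a year, the "reinfected every year" reading above follows directly.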

Quote from Wikipedia, found after running this simulation: "In the month of March, 10,900 excess deaths have been estimated, that have not been reported as COVID-19 deaths."[479] (Compare my parameter h projection.)

Lesson

When I did the Apr 7 evaluation, I assumed that the data before March 27 was noise. There was an apparent plateau from March 21 onward, but the spike of deaths on March 27 made me believe the earlier data was not usable and that I should start from there. In retrospect, with the May 9 rerun, even though it is still instructed to only account for March 27 and later, the curve accommodates the data from March 21 nicely and treats both March 21 and March 27 as abnormally high spikes. If I had used the data from March 21, my Apr 7 model would have been even more accurate. I'm still happy with the early projection; it shows that my model worked.

Other idea toyed with

Finally, I'm adding two curves: one orange with distancing and one purple without. Distancing indeed saves 10k lives in those models, bringing total deaths from 25k down to 15k. The problem with those curves is that they assume a very small pool of at most 75k susceptible people, and a growth rate that is very hard to explain (even less so given the tiny pool of people at risk). This is the best social distancing can do, anyhow.

Today I pushed the model to find my worst plausible case (in cyan). It requires an R0 of 12 without distancing and 2.5 with distancing, plus a very specific number of susceptible people (186000). I don't think our data match that (especially the high starting R0), but I wanted to try my best to reach the worst cases some other sources have predicted. Deaths without distancing: 40k; with distancing: 20k. But I want to stress again: this model is stable only if I lock the social-distancing impact at that high level and cherry-pick the perfect susceptible count to reach the perfect recipe for disaster.

Deaths and IFR

The June 22 simulation was expecting 53205 deaths per 392.43257826601 days (whatever the mitigation chosen). It's rather the point of this page to show that we can accurately predict deaths without having to estimate the number of infected at all. But since I heard the CDC now googles IFRs to take a mean of them, I will guesstimate too. I can estimate the R0 by taking parameter b and raising it to the number of days an infected person is infectious; I'm estimating 12 days, using the CDC 6-6-6 rule. I obtain an R0 of 2.2673. Knowing that Rt will oscillate around 1, I can infer that 55.9% of Italians will soon be resistant to COVID. The highest number of infected, given a population of 60.36 million, is 33.73 million. If that's the case, the IFR would be around 0.1577%.
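That chain of arithmetic can be retraced in a few lines. The page does not state exactly which b value was used; the second June 22 model's b comes closest to reproducing the stated R0, so it is assumed here (a Python sketch):

```python
# Retracing the IFR arithmetic above. Assumptions: b is taken from the
# second June 22 model (not stated explicitly on the page), and the
# infectious period is the 12 days estimated in the text.
b = 1.0706780360658        # daily growth rate (assumed source: June 22 model 2)
infectious_days = 12       # assumed infectious period (CDC 6-6-6 rule)
population = 60.36e6       # Italy
deaths = 53205             # projected deaths from the June 22 simulation

r0 = b ** infectious_days           # roughly 2.27
resistant = 1 - 1 / r0              # fraction resistant once Rt settles near 1, ~55.9%
infected = resistant * population   # roughly 33.7 million
ifr = deaths / infected             # roughly 0.16%
```

The resistant fraction here is the standard herd-immunity threshold 1 - 1/R0, which is what "Rt will oscillate around 1" implies.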

IFR=0.1577% (with covid)

However, we need to emphasize that if 55% of the population were to be infected, and they were to test positive for about a month, then given Italy's 1.07% yearly death rate, 25,680 Italians would have died WITH covid and not really FROM covid. Factoring this in, our IFR could get as low as 0.0761%.

IFR=0.0761% (from covid)

As I understand it, this last estimation is how other coronaviruses and the flu are usually estimated. And yes, I reach roughly the same results. Covid is a coronavirus (plot twist).

This being said, it is important to remember that an enormous share of the world's population _will be infected_. It's still a world health crisis.

Interesting discovery

I'm not sure how; I think it has to do with the curvature in the data. But I seem able to see that the absolute maximum number of Italians susceptible of dying is about 300,000. However, if I use the current age pyramid of Italy and combine it with the current CFR of the disease, we could get a total of 4,869,075 deaths. This could mean two things: 1) some Italians are already immune, or 2) they have mild cases that don't get tested.

I can't know for sure, but all I know is that only 6.1% of Italians should be concerned by the actual CFR. Iluvalar (discuss • contribs) 00:33, 11 April 2020 (UTC)

Social distance efficacy

I'm thinking today: if 60-70% of the population must be immune to stall the spread, then by social distancing we may remove 30% of the risk (or, more precisely, halve the risk for 60% of us) and stall the curve at a lower place. But it paradoxically increases the risk of eventually getting sick for the people who have to stay active. The nurses who take care of the elderly now have a 100% chance of eventually getting the virus: less chance this week, but more eventually. In the end, it just guarantees our elderly that each and every nurse taking care of them will have the virus at some point, instead of the baseline 70%. This could explain why the models I'm using somewhat refuse to assign too big an effect to social distancing. The effect is positive, but some of it backfires on us.
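The 60-70% figure matches the classic herd-immunity threshold 1 - 1/R0 (assuming that is the formula behind it, which the text does not say explicitly). A quick check in Python:

```python
# Herd-immunity threshold 1 - 1/R0 for a few plausible R0 values,
# showing where the 60-70% range plausibly comes from.
for r0 in (2.5, 3.0, 3.5):
    threshold = 1 - 1 / r0
    print(f"R0={r0}: {threshold:.0%} must be immune to stall the spread")
```

R0 = 2.5 gives 60% and R0 = 3.5 gives about 71%, bracketing the range quoted above.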


Long term ?

I managed to include a seasonal effect in the simulation. I'm not fully confident, but it's a probable scenario: ~65369 deaths per year in this one.

September 20 review: 51424 per year. The spike is a little scarier, but the overall curve is lower. November 19: obtained by insisting A LOT on not missing the recent deaths. It is therefore not an overall increase in total deaths, even if it looks slightly higher due to how I obtained it.

Invalid Data

Here are the confirmed cases in China starting at Jan 22, along with two curves of theoretical exponential growth. As we can see, if there was a single case on Dec 1, or even Nov 1, we would have gotten at best one single day of true data somewhere around Feb 1. The rest of the confirmed-case data would lie under the real curve since then, and by now be orders of magnitude lower than the real cases.

Beyond that point, the confirmed-case data only informs us about the number of cases tested, and cannot reflect the real progression curve of the virus.

What does it mean?

All the confirmed-case data we have is pointless, a mere depiction of our own testing capacity, until we hit a change in the curve. Using test production as an indicator for a curve or any model is bogus and ill-informed. I see too many governments and scientists using the confirmed-case curve. Two exponential curves can only meet at one point (two, but that's obviously impossible in this case).

The first usable data comes when the curve changes drastically: when the graph switches from showing the number tested to showing the number testable. I believe we can clearly see this in Italy's death data since March 21.

This means that attempts to infer the effectiveness of social distancing from the earlier part of the curve are futile.
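A toy illustration of this argument, with all growth rates assumed purely for illustration: while true cases exceed test capacity, the reported curve is simply the capacity curve, and only once the epidemic falls back below capacity does the reported data become informative again.

```python
# Toy model: reported cases = min(true cases, test capacity).
# True infections rise 25%/day, then fall 10%/day after a peak at day 30;
# test capacity grows a steady 5%/day (all numbers assumed for illustration).
true_cases = [5 * 1.25 ** min(day, 30) * 0.90 ** max(0, day - 30)
              for day in range(70)]
capacity   = [80 * 1.05 ** day for day in range(70)]
reported   = [min(t, c) for t, c in zip(true_cases, capacity)]

# While true cases exceed capacity, the reported curve IS the capacity curve:
saturated = [day for day in range(70) if true_cases[day] > capacity[day]]
```

In this sketch the reported curve equals the capacity curve for the whole saturated stretch, so any "effect of distancing" read off that stretch is really just the testing ramp-up.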

Code

Per request, but it's ugly. If you want to add a premise or test a scenario, it is likely faster to contact me here and ask me to edit that soup myself to spit out more projections.

<?php
//Covid sim
//Warning : This code only finds the local minima that minimize the distance to the real data.
// It does not interpret them, and it can and will spit nonsense given the premises fed to it.
error_reporting(E_ALL & ~E_NOTICE);
$ital2=[0, 0, 0, 0, 1, 1, 1, 4, 3, 2, 5, 4, 8, 5, 18, 27, 28, 41, 49, 36, 133, 97, 168, 196, 189, 250, 175, 368, 349, 345, 475, 427, 627, 793, 651, 601, 743, 683, 712, 919, 889, 756, 812, 837, 727, 760, 766, 681, 525, 636, 604, 542];
$italydeaths=[627, 793, 651, 601, 743, 683, 712, 919, 889, 756, 812, 837, 727, 760, 766, 681, 525, 636, 604, 542];
function model2($m,$daymax){ //This is just good old SIR, not even integrated
	$day=$m['a'];
	$case=1;
	$totcase=0; //cumulative case count, used to deplete the susceptible pool
	while($day++<$daymax){
		if($day>-5){
			$case=$case*($m['b']-$m['d'])*($m['c']-$totcase)/$m['c'];
		}
		else{
			$case=$case*$m['b']*($m['c']-$totcase)/$m['c'];
		}
		
		$totcase+=$case;
	}
	return $case;
}
function approx_improv(&$m,$data,$index,$adj=1.1){
	$check=approx_diff($m,$data);
	$lastcheck=$check;
	$initval=$m[$index];
	//echo '<br>check:'. $check .' '. $adj;
	$stop=0; $x=0; //first, grow the parameter until the fit stops improving
	while(!$stop and $x++<100){
		$m[$index]*=$adj;
		$check2=approx_diff($m,$data);
		//echo '<br>--check2@'. $m[$index] .'='. $check2;
		if($check2>$lastcheck){
			$stop=1;
			$m[$index]/=$adj;
		}
		else{
			$lastcheck=$check2;
		}
	}
	$stop=0; $x=0; //then shrink it until the fit stops improving
	while(!$stop and $x++<100){
		$m[$index]/=$adj;
		$check2=approx_diff($m,$data);
		//echo '<br>--check2@'. $m[$index] .'='. $check2;
		if($check2>$lastcheck){
			$stop=1;
			$m[$index]*=$adj;
		}
		else{
			$lastcheck=$check2;
		}
	}
}
function approx_diff($m,$data){
	$ret=400/model2($m,-26);
	if($ret<1){
		$ret=1/$ret;
	}
	$ret=pow($ret,2); //I used this bit to create a false data point
	$ret=1; //resets the value above: the false data point is currently disabled
	foreach($data as $no => $val){
		$modelval=model2($m,$no);
		//echo '<br>'. $no .','. $val .'|'. $modelval .')';
		$diff=$modelval/$val;
		if($diff<1){
			$diff=pow(1/$diff,2); //This cheeky power here make values under the curve less plausible
		}
		$ret*=$diff;
	}
	return $ret;
}

$m['a']=-99.190530098464; //Day of 1st death
$m['b']=1.0785697466622; //Growth per day 
$m['d']=0.0073135735980529; //impact of distancing on $m['b']
$m['c']=317269.64892627; //Maximum susceptible deaths.


$x=0;
while($x++<5){
	$adj=1.03;
	approx_improv($m,$italydeaths,'a',$adj);
	approx_improv($m,$italydeaths,'b',1.0003);
	approx_improv($m,$italydeaths,'c',$adj);
	approx_improv($m,$italydeaths,'d',1.0003);
}
echo "<br>\$m['a']=". $m['a'];
echo ";<br>\$m['b']=". $m['b'];
echo ";<br>\$m['d']=". $m['d'];
echo ";<br>\$m['c']=". $m['c'] .';<br>';
$x=0;
$sum=0;
while($x++<count($ital2)+100){ //+100 so the sum of deaths can be calculated
//while($x++<365){
	$day=$x-count($ital2)+count($italydeaths);
	$val=model2($m,$x-count($ital2)+count($italydeaths));
	//echo '('. $day .'):';
	//$x+=30;
	echo floor($val) .',';
	$sum+=floor($val);
}
echo '<br>sum:'. $sum;
This article is issued from Wikiversity. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.