Thursday, October 3, 2019

Firefox Normandy

Firefox, through its Normandy feature, provides an option for unsolicited/ automatic updates to the default preference values of a targeted Firefox instance. For more on the risks this poses, take a look at the ycombinator (Hacker News) threads.

To turn off Normandy in Firefox use the advanced settings route: about:config > app.normandy.enabled = false.
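
For a more durable set-up, the same preference (plus the related Shield studies opt-out; both are standard Firefox preference names) can be pinned via a user.js file in the profile directory:

    // user.js in the Firefox profile directory
    user_pref("app.normandy.enabled", false);              // turn off Normandy
    user_pref("app.shield.optoutstudies.enabled", false);  // opt out of Shield studies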

Update 1 (23-Oct-19):
 - In principle, Mozilla (Firefox) has always been in favour of user privacy.

Saturday, September 21, 2019

Last Petrol Car

In the year 2024 my present BS-III Hyundai petrol hatchback will reach its end of life, 15 years after its first drive out of the showroom. Given all the buzz from the Electric Vehicle (EV) space, it will very likely be my last petrol car. At some level, most of us have next to zero attachment to the fuel that powers the vehicle under the hood (petrol, CNG, electricity, etc.). What we care about is that the new vehicle shouldn't be a downgrade in terms of reliability, comfort, features, looks, pricing, drivability, power, pickup, etc., nor an increase in terms of purchase & running costs.

Battery operated EVs seem to be getting better by the day. There's good traction in the three-wheeler (battery operated autos/ totos) space. Two- & four-wheelers are likely to hit mass markets soon, with pricing that would be lucrative (perhaps tax-incentivized). Further, widespread infrastructural & service support needs to be introduced to give people the confidence to switch to EVs.

Yet, at the moment, EV technologies - battery, chargers, fire & safety protocols, instrumentation, cabling & connectors, etc. - are at an early-to-mid maturity level. Driving range per charge is about 100 Kms for entry-segment cars, which is not enough. It's quite common for people to drive ~150 Kms daily for work; on highways, the distances covered could be much more. So a sub-300 Km range would simply not do!

At the same time, the mass market pricing levels (INR 3 to 6 lacs) should not be breached in any way. The existing coverage of mechanics & service centres of various manufacturers (Maruti, Hyundai, Mahindra, Tata, etc.) needs to be upgraded to support EVs as well.

Reliable electricity remains a constraint in most cities including the metros. On the generation side, renewables would need a wider push. Residential solar rooftop set-ups could be one area of focus. Through such set-ups, individual households & complexes could achieve self-sufficiency for their growing energy needs, including the EV burden/ load (@20-30 Units for full charge per vehicle X 30 days = 600-900 units per vehicle per month). Standard practices to popularize rooftop solar set-ups employed the world over such as PayGo models, incentives/ tax breaks, quality controls, support & maintenance, etc. should be introduced here as well. If possible, it would be great to have the EVs themselves equipped with solar panels on the body to auto-charge whenever required under direct sunlight. Eagerly waiting for these clean green technologies to evolve and make inroads very soon!
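
As a back-of-the-envelope check on that load, here's a minimal Python sketch; the ~4 units/day yield per kWp of rooftop solar is an assumed ballpark figure, not a quoted spec:

    # Rough sizing of rooftop solar for the EV charging load (ballpark figures)
    units_per_full_charge = 25          # kWh per charge (20-30 range above)
    charges_per_month = 30              # one full charge a day
    ev_load = units_per_full_charge * charges_per_month   # ~750 units/month

    yield_per_kwp_month = 4 * 30        # assumed ~4 kWh/day per kWp of panels
    panels_kwp = ev_load / yield_per_kwp_month

    print(f"EV load: {ev_load} units/month")
    print(f"Rooftop solar needed: ~{panels_kwp:.1f} kWp")   # ~6.3 kWp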

Update 1 (09-Oct-19):
 - An assessment of the current state of EV adoption in India by Business Standard.

Update 2 (23-Oct-19):
 - Bajaj Chetak to be relaunched in an Electric avatar.
 - Blu-Smart all electric cabs visible on Delhi roads.

Thursday, September 19, 2019

Renewable Energy In India

India holds great potential in the renewable energy space. We have ample opportunities to generate all our present and future energy needs from sources like solar, wind, water and biomass.

From an energy generation capacity from renewables pegged at ~60 GW (in 2017), we are targeting to reach about 175 GW (100 GW solar, 60 GW wind, 10 GW biomass, 5 GW small hydro) by 2022, which would be close to 50% of our entire energy needs. With ground work for mass adoption of Electric Vehicles (EVs) gaining traction, our demand for power and generation from renewables will need to scale up even further, to the extent that we may become energy surplus one day and be able to export to the neighbourhood. For a sneak peek into the state of the art from the world of renewables, head over to the Renewable Energy India (REI) Expo 2019 currently underway at the Knowledge Park II, Greater Noida.

The REI-2019 has exhibitors from leaders in the renewables space such as China, Bangladesh, France, Germany, India, Israel, Netherlands, Saudi Arabia, Singapore, Slovakia, South Korea, Taiwan, Tunisia, UK, USA, Vietnam, etc. They are showcasing their product portfolios from solar & wind power devices to installations on floating & permanent structures, from switching & grid apparatus to connectors, from inverters & batteries to EVs, and more. Expo timings are from 10 am to 6 pm. Walk-in as well as online registrations are allowed. Go see the future!

Update 1 (21-Sep-19):
- Listen to what Greta Thunberg has to say & check out her zero-carbon boat

Update 2 (23-Oct-19):
- Coal to continue powering India's energy requirements for decades - Swaminomics

Wednesday, September 18, 2019

Sim Swap Behind Twitter CEO's Account Hack

There was a lot of buzz about the recent hacking of the Twitter CEO Jack Dorsey's account. The key thing to note is that the hack was effected through a SIM swap fraud, wherein a fraudster tricks a mobile carrier into transferring a number. Your mobile, being the key to your digital life & hard-earned money, gets completely compromised through a fraud like SIM swap.

SIM swap fraud can be effected through some form of social engineering, using stolen/ illegally shared personal data of the user to authenticate with the telecom operator. Other routes include a malware- or virus-infected app or hardware taking over the user's device, or plain old manipulation of the telecom company's personnel through pressure tactics, bribes, etc.

In order to limit cases of fraud, DoT India has brought in a few mandatory checks into the process of swapping/ upgrading SIM cards, to be followed by all telecom operators. These include an IVRS-based confirmation call to the subscriber on the current working SIM, a confirmation SMS to the current working SIM, and blocking of SMS features for 24 hours after the SIM swap.

Thanks to these checks, the 24-hour window is reasonably sized to allow the actual owner to react in case of a fraud. Once they realize that their phone has mysteriously gone completely out of network coverage for long, and doesn't seem to work even after restarting and switching to a location known to have good coverage, alarm bells ought to go off. They should immediately contact the telecom operator's helpline number/ visit the official store.

At the same time, the 24-hour window is not excessively long, so a genuine user wanting to swap/ upgrade isn't unduly discomforted. Since SMS services remain disabled, SMS-based OTP authentication for apps, banking, etc. does not work within this period, thereby preventing misuse by fraudsters.

Perhaps telecom regulators & players elsewhere need to follow suit. Twitter meanwhile has chosen to apply a band-aid solution by turning off its tweet-via-SMS feature post the hack. Clearly a lot more needs to be done to put an end to the menace.

Thursday, August 29, 2019

What? A Man In The Middle!

Well yes, there could be someone intercepting all your digital & online traffic, unless proper precautions to secure it are in place. The focus of this article is not how to be the man-in-the-middle (MITM), but how to prevent getting snooped on by one. Here are some basic online hygiene techniques to follow to remain safe, as far as possible.

To begin with let's look at the high level components that are a part of the digital traffic:
  • Device: Phone, pad or desktop
  • App: Running on the device (Whatsapp, Fb, Gmail, Browser, etc.)
  • Server: Server components of the service provider, organization, etc. that is listening to & providing some service to the app
  • Network: Wired, wireless, hybrid channel through which the digital packets (bits) travel between the device & the server
Of course, there are many other components in play, but for now we'll keep things simple.

Device & Apps
The user's device is the first & most common point of vulnerability in the chain. Devices get infected by viruses or malware. Some defences include:
  • Being particular about not installing any untrusted, unverified software. Install only reputed apps and software that are actively maintained & updated, patching/ resolving vulnerabilities inherent in their components or dependent libraries. App developers themselves must be well conversant with standards (secure cookies, etc.) and industry best practices such as the OWASP Top 10, to avoid building poor quality, vulnerable apps/ software.
  • Keeping devices updated. Staying up to date offers the best defence against recently detected vulnerabilities, which the manufacturers & software vendors rush to fix.
  • By not clicking on unverified links or downloads.
  • Making use of conservative settings for all apps, with absolutely minimal privileges. Company-provided default permissions are found to be too lax & liberal in many cases. So review what permissions are present & change them to more minimal settings. For instance, why on earth would an SMS messaging app need access to the phone's camera?

    In order to avoid crashing your phone, make piecemeal changes to the app settings & test. If it works, great. If not, make a note & revert! Later, recheck the privileges that you felt were unnecessary but whose removal caused problems.

    Too much work? Well, until the device's operating system undergoes major privacy-focused revisions, there doesn't seem to be much of an alternative.
  • Sticking only to the manufacturer specified software repositories for updates.
  • For Windows-based/ similar systems, installing an updated anti-virus is mandatory. Use the free (for personal use) Avast anti-virus if nothing else. Better still, switch to a more robust *nix based OS.
  • If you are a traditionalist using browsers, Mozilla Firefox set up with conservative & minimal privacy settings scores significantly over its competitors, which are mostly data-capturing ad machines. If possible, contribute to help keep Mozilla, a non-profit, afloat.
  • Physically secure your device with a password/ PIN & do not allow any unknown person to use it. In case temporary access is to be provided, especially on desktops, create guest credentials for the user with limited privileges.

Server
This is where the real action to process the user's request takes place. Whether it is info about the weather, sending emails, getting chat notifications, doing banking transactions, uploading photos, etc., the user sends the request along with the data to the server to perform the necessary action. The server itself being a device (mostly a collection of devices: database, web-server, load-balancer, cloud service, etc.) is vulnerable to all the device & app risks above, plus many others that server engineers & operations teams work to harden against.

Standards to be employed, learnings & best practices are shared widely by most of the leaders working in server-side technologies via blogs, articles, conferences, journals, communities, etc. The cloud vendors (Amazon AWS, Microsoft Azure, Google Cloud, Rackspace, and so on) are especially active in this regard, busy pushing the bar higher with regular improvements to the various server technologies.

There are some open source tools available to check different aspects of a server set-up. For instance, the OWASP test for HSTS (HTTP Strict Transport Security) & the SslLabs Server Rating Guide provide details on the requirements for the server's SSL certificate used to encrypt data. SslLabs also has an online tool to test & rate the set-up of any publicly accessible server & highlight potential weaknesses.
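
As an illustration, a quick check for the HSTS header of any public server can be scripted in a few lines of Python (a minimal sketch; example.com is a placeholder for the site to be tested):

    import urllib.request

    # Fetch the page & look for the HSTS response header
    url = "https://example.com"
    with urllib.request.urlopen(url, timeout=10) as resp:
        hsts = resp.headers.get("Strict-Transport-Security")
        if hsts:
            print("HSTS enabled:", hsts)
        else:
            print("No HSTS header found")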

Network
Between the user's device and the server lies the network through which the data and instructions flow. The network may include wired, wireless or a combination of components (routers, hubs, gateways, etc.). The best form of defence against the man-in-the-middle attack is to ensure that only strongly encrypted data is sent over the network (end-to-end (e2e) encryption).

The communication between the user's device & the server takes place via the secure HTTPS protocol, using a signed SSL certificate issued by a reputed certificate authority. This ensures that as long as the certificate's private key (known only to the server) remains secure, the end-to-end (e2e) encryption between the user's device & the server holds.

Yet, there are ways in which a server set up for HTTPS communication might end up downgrading to the insecure HTTP protocol or being compromised (see the SslLabs Server Rating Guide). The best defence against this is to configure the server to work solely over HTTPS, using the HTTP Strict Transport Security (HSTS) mechanism.

Once HSTS is enabled on the server, any non-secure HTTP request to the server is either rejected or redirected to the secure HTTPS channel. All insecure HTTP requests from the user's end are automatically switched over to HTTPS, & the connection between client and server is dropped in case of a problem with the server's certificate. HSTS thus protects against various man-in-the-middle attack scenarios such as protocol downgrade (to insecure HTTP) & session hijacking.
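
For reference, enabling HSTS boils down to the server sending a response header of the following form (the one-year max-age & sub-domain coverage shown are typical illustrative values):

    Strict-Transport-Security: max-age=31536000; includeSubDomains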

Beyond e2e encryption & HSTS, the server address lookup done by the user's device could also get manipulated (via ARP spoofing within the LAN & DNS spoofing), directing user data to a fake server's address in place of the genuine one. Performing the address lookup securely via DNSSEC provides a good mitigation for the DNS vulnerability.

These basic requirements are essential for managing the safety of users' data. Yet, in this eternal tussle between the yin and yang of security, a lot more needs to be done & certainly the end goal hasn't been reached. As new threats emerge, we can only hope to collectively strengthen our defences and stay alert & updated to remain secure.
 
 

Monday, August 26, 2019

Dconf, Gsettings, Gnome Files/ Nautilus Refresher

Dconf is the Linux key-based configuration system that provides the back end to Gsettings for storing configurations. Dconf settings can be updated via dconf-editor and/ or the gsettings command line. Gnome Files/ Nautilus settings, for instance, are Dconf-based & can be accessed/ updated with these tools.
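
For instance, the Nautilus folder-view preference can be read & updated from the terminal as below (a minimal sketch; schema & key names may vary with the installed Gnome version):

    # Read the current Nautilus folder view preference via gsettings
    gsettings get org.gnome.nautilus.preferences default-folder-viewer

    # Change it to list view
    gsettings set org.gnome.nautilus.preferences default-folder-viewer 'list-view'

    # The same key, read through dconf directly
    dconf read /org/gnome/nautilus/preferences/default-folder-viewer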

Tuesday, April 30, 2019

996.ICU

About time the revolution reached our shores. Concerns from the 996.ICU movement most certainly apply here. There's additionally the grind of 3+ hours of daily travel time between workplace & home in the typically over-crowded metros.

The demands for a well-balanced work life (wbl) are totally justified. The 955 schedule, flexi-hours, work from home: these are not concepts to be just spoken of but to be put into practice. Let's work smart, live healthy & be happy!

Tuesday, March 26, 2019

Opinions On A Topic

Media agencies of the day are busy flooding us with news - wanted, unwanted, real, fake, good, bad, ugly, whatever. Yet, for the user, the challenge of staying truly informed has never been tougher. Sifting the wheat from the chaff is both computationally & practically hard!

There's a real need to automatically detect, flag & block misleading information from propagating. Though the technology doesn't exist at the moment, offerings are very likely to come up soon & get refined over time to nail the problem well enough. While we await breakthroughs on that front, the best bet for now is to depend on traditional human judgment.

- Make use of a set (not just one or two) of trusted media sources that employ professional & expert journalists. Rely on their expertise to do the job of collecting & presenting the facts correctly. Assuming (hopefully) that these people/ organizations behave professionally, the information that gets through these sources would be far better.

- Fact-check details across the entire set of sources. This helps mitigate a temporary (or permanent), deliberate/ inadvertent faltering, manipulation, influence, etc. of the odd source. Use the set as a weak quorum that collectively highlights & prevents propagation of misinformation. Even if a few members falter, it's unlikely that all would. The majority would not allow the fakes to make it into their respective channels.

- The challenging part is when a certain piece shows up as breaking news on one channel & not the others. One could default to labeling it fake/ unverified, with the following considerations for the news piece:

 Case 1: Turns out fake, doesn't show up on the other sources
     => Remains Correctly Marked Fake

 Case 2: Turns out to be genuine & eventually shows up on other/ majority sources
     => Gets Correctly Marked True

 Case 3: Is genuine, but acquired via some form of journalistic brilliance (expose, criminal/ undercover journalism, etc.) that can't be re-run, or is about a region/ issue largely ignored by the mainstream media unwilling to do the verification, or for some other reason can't be verified
     => Remains Incorrectly Marked Fake


Case 3 is obviously the toughest to crack. While some specifics may be impossible to verify, other allied details could be easier to access & verify. Once some other media groups (beyond the one that reported) get involved in the secondary verification, there is some likelihood of the true facts emerging.

For the marginalized, there are social groups & organizations, governmental & non-governmental, that publish reports on issues from ground zero. At the same time, as connectivity improves, citizens themselves would be able to bring local issues onto national & international platforms. In the interim, these will have to be relied upon until commercial interests & mainstream media eventually bring the marginalized into the fold. Nonetheless, much more thought & effort is needed to check the spread of misinformation.

Finally, here's a little script 'op-on.sh' / 'op-on.py' (works/ tested on *nix desktops) to look up opinions (buzz) on any given topic across a set of media agencies of repute. Alternatively, a bookmarklet could be added to the browser, enabling the same look-up across the sites. The op-on bookmarklet (tested on Firefox & Chrome) can be installed by right-clicking & adding it as a bookmark (or by copying the script into the URL of a new bookmark). Pop-up blockers will need to be temporarily disabled (e.g. by clicking allow pop-ups in Firefox) for the script to work.

The set of media agencies that these scripts look up includes groups like TOI, IE, India Today, Times Now, WION, NDTV, Hindu, HT, Print, Quint, Week, Reuters, BBC, and so on. This might help the curious human reader look up all those sources for opinions on any topic of interest.
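
For a flavour of what op-on.py does, here's a minimal sketch along the same lines (assumed behaviour & an abridged site list, not the original script): it opens a site-restricted search for the topic against each media site in a browser tab.

    #!/usr/bin/env python3
    # Open a site-restricted search for a topic across a set of media sites.
    import sys
    import urllib.parse
    import webbrowser

    SITES = [
        "timesofindia.indiatimes.com", "indianexpress.com", "indiatoday.in",
        "thehindu.com", "hindustantimes.com", "reuters.com", "bbc.com",
    ]

    topic = " ".join(sys.argv[1:]) or "renewable energy"
    for site in SITES:
        query = urllib.parse.quote(f"site:{site} {topic}")
        webbrowser.open_new_tab(f"https://www.google.com/search?q={query}")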

Update 1 (16-Sep-19): Some interesting developments:

Friday, March 8, 2019

Secure DNS

Domain Name System (DNS) is one of the backbones of the internet. DNS helps translate a URL (e.g. blahblah.com) to its corresponding IP address (e.g. 10.2.93.54). Thanks to DNS, humans can access the internet via human-friendly URLs, rather than having to remember & punch in numeric IPs. So much simpler to say "look it up on Google" than "look it up on 172.168...".

Working of DNS

The working of DNS involves looking up DNS servers spread out over the internet. When a user enters a URL in the browser, the address resolver on their system queries the DNS servers configured for their system (router/ network, ISP, etc.) for the corresponding IP address. The resolver recursively looks up the root DNS server, then the top level domain (.com, .in), then the second level domain (google, yahoo, etc.; the authoritative server for the domain), & from it finally the sub-domain (www, mail, etc.) to arrive at the corresponding IP address.
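
The end result of this recursive walk can be seen with a one-liner (a minimal illustration using the system's configured resolver):

    import socket

    # Returns (canonical hostname, alias list, list of IP addresses)
    print(socket.gethostbyname_ex("www.example.com"))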

DNS requests are typically made in plain text via UDP or TCP. In addition to the destination URL, these requests also carry enough source-identifiable information with them. Together with the recursive nature of the lookups via several intermediaries, this makes DNS requests vulnerable to being observed & tracked. The response could even be spoofed by a malicious intermediary that changes the IP address & directs the user to a scam site.

DNS over HTTPS (DoH)

A very recent development has been the introduction of DNS over HTTPS (DoH) in Firefox. HTTPS is the standard protocol used for end-to-end encryption of traffic over the internet. This prevents eavesdropping of the traffic between the client & the server by any intermediary.

To further secure the DNS request, DoH also brings in the concept of the Trusted Recursive Resolver (TRR). The TRR is trusted & of repute, & provides guarantees of privacy & security to the user. The default for Firefox is Cloudflare, though other TRRs are available for the user to choose from. Sadly though, OpenDNS isn't on board with DoH or TRR, and instead has its own offering called DNSCrypt. Hope to see more convergence as adoption of these technologies improves.

Setting up DoH in Firefox (ver. 65.0) requires going to Preferences > Network Settings & checking "Enable DNS over HTTPS", with the default Cloudflare TRR. Alternatively, the flags "network.trr.mode" & "network.trr.uri" can be set via about:config.
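
The equivalent about:config/ user.js entries would look like the following (mode 2 prefers DoH with a fallback to the regular resolver, mode 3 enforces DoH only; the resolver URI shown is the Cloudflare default used by Mozilla):

    user_pref("network.trr.mode", 2);   // 2 = DoH preferred, 3 = DoH only
    user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");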

To confirm that the set-up is correct, navigate to the Cloudflare test-my-browser page & validate. This should result in successful check marks in green against "Secure DNS" & "TLS 1.3". Some further set-up may be needed in case the other two checks fail.

For DNSSEC, a DNSSEC-compatible DNS server will need to be added; pick Cloudflare DNS, Google DNS or any other from the DNS servers list. For the Encrypted SNI indicator, on the other hand, the flag "network.security.esni.enabled" can be enabled. Since ESNI is still at an experimental stage, there could be changes (or bugs) that get uncovered & resolved in the future.

Enabling at a Global Level

The DoH setting discussed here is limited to Firefox. DNS lookups done outside of Firefox, from any other browser, application or the OS itself, are unable to leverage DoH. DoH at the global/ OS level could be set up via proxies. Given that DoH runs over HTTPS, primarily a high level protocol for secure transfer of hypertext documents, it may be preferable to secure DNS directly over the TLS protocol.

In this regard, DNS over TLS (DoT) is being developed. Ubuntu ver. 18.04 & some other Linux flavours offer experimental DoT support. While DoT has some catching up to do vis-a-vis DoH, debates are raging over the merits & demerits of the two options for securing DNS requests. Over time we can hope for the gaps & issues to be resolved, & far better privacy & security to be offered to the end-user.
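
For instance, on systems running systemd-resolved, an experimental DoT set-up is a small edit to /etc/systemd/resolved.conf followed by a service restart (a minimal sketch; option availability depends on the systemd version):

    [Resolve]
    DNS=1.1.1.1 9.9.9.9
    DNSOverTLS=opportunistic    # or 'yes' on newer systemd, to enforce DoT

    # then: sudo systemctl restart systemd-resolved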

Update 1 (25-Feb-21):
Reference links regarding enabling DoH across devices.

Enabling In Android:
- https://android.stackexchange.com/questions/214574/how-do-i-enable-dns-over-https-on-firefox-for-android 
(In addition to the various network.trr.* settings for using the OpenDNS Family Shield DoH, also look up the IP for the domain name "doh.familyshield.opendns.com" & set that value in network.trr.uri)
- https://blog.cloudflare.com/enable-private-dns-with-1-1-1-1-on-android-9-pie/

DoH Providers:
- https://github.com/curl/curl/wiki/DNS-over-HTTPS (Cloudflare also offers a Family shield/ filter)
- https://support.opendns.com/hc/en-us/articles/360038086532-Using-DNS-over-HTTPS-DoH-with-OpenDNS
- https://support.mozilla.org/en-US/kb/dns-over-https-doh-faqs

Tuesday, March 5, 2019

Human Design

"What a blessing (mercy) it would be if we could open and shut our ears as easily as we open and shut our eyes. - Georg C. Lichtenberg"

So true. Many an offensive situation could be defused by simply dropping down the earlids. In a hyper-noisy nation like ours where the chatter never dies down, earmarked (sic!) noise-free zones (around hospitals, schools, etc.) wouldn't need to exist. There could even be earlid-downed marches to protest against the high decibel rants pushed at us from all nooks & corners of the planet.

Perhaps Kikazaru/ Mikazaru, the macaque who first prescribed to us to hear no evil, would be seen jumping around like never before. Only to be reminded the very next minute by his two wise buddies of its futility, and of how their respective advice has been largely ignored despite there being lids for the eyes & the mouth. Finally, we would perhaps be able to truly experience the world in the way that people who can't hear experience it, even today. So yes, I agree with Mr. Lichtenberg that it would be a real blessing!

In that same spirit, we could also do with another design change, one that might already exist in a parallel universe somewhere. It would be nice to shift humans from a 4-hourly hunger cycle to a more pragmatic 4-monthly one. No getting hungry every few hours, no snacking, no gorging, no fun (seriously)?

There'd instead be a triannual feasting day for each individual. That would be the day to celebrate, bigger than any birthday or anniversary combined. The person concerned would probably down a few hundred kilos of their favorite gourmet fare. Gastronomic desires fulfilled like there's no tomorrow; there really wouldn't be one for the next four months. Guests meanwhile would be making merry - singing, dancing, & everything else - awaiting their own day of feasting.

There are stories about Indian mystics & sadhus who achieved a state of being, or were just built differently, where they didn't need any food for days together. But they seem to have gone extinct, save for some hunger artists. On the other hand, many animal species are known to feed in cycles with long fasting breaks in between. The camel, for instance, carries a special biological organ (the hump) to store food (fat) reserves, & can go without food & water for weeks together. In nature the concept is not so rare; a few hundred genes at play, that's all.

Yet, the impact of a triannual feeding cycle on our social structures would be unimaginable. For instance, the movie screenplay where the protagonist complains about the paapi pet (evil stomach) would simply be gone. Hunger, malnourishment, perhaps even poverty would be over. Or is that taking it too far? Newer enterprises would no doubt emerge and work their way to profitability around the altered version of this fundamental base human need for food. In any case, there would be a paradigm shift in our social, economic & policy frameworks all over. Our entire existence would be markedly different, & hopefully better.

Monday, March 4, 2019

AB Testing To Establish Causation

A/B testing is a form of testing performed to compare the effectiveness of different versions of a product with randomly distributed (i.i.d.) end-user groups. Each group gets to use only one version of the product. Assignment of any particular user to a specific group is done at random, without any biases. The user composition of the different groups is assumed to be similar, to the extent that switching the versions of the product between any two groups at random would make no difference to the overall results of the test.

A/B testing is an example of a simple randomized controlled trial. Such tests help establish a causal relationship between a particular element of change & the measured outcome. The element of change could be something like the placement of certain elements on a page, adding/ removing a feature of the product, and so on. The outcome measured could be the impact on additional purchases, clicks, time of engagement, conversion rate, etc.

During the testing period, users are randomly assigned to the different groups. The key aspect of the test is this real-time random assignment of otherwise similar users, so that unknown confounding factors (demographics, seasonality, tech. competence, background, etc.) have no impact on the test objective. When tested with a fairly large number of users, almost every group ends up with a good sample of users representative of the underlying population.

One of the groups (the control group) is shown the baseline version of the product (maybe an older version of an existing product), against which the alternate versions are compared. For every group, the proportion of users that fulfilled the stated objective (purchased, clicked, converted, etc.) is captured.

The proportions (pi) are then used to compute the Z-value test statistic (assuming a large, normally distributed user base), confidence intervals, etc. The null hypothesis is that the proportions (pi) are all similar, i.e. not significantly different from the proportion of the control group (pc).

For the two-version A/B test scenario:

   Null hypothesis H0(p1 = pc) vs. the alternate hypothesis H1(p1 != pc).

   p1 = X1/ N1 (Test group)
   pc = Xc/ Nc (Control group)
   p_total = (X1 + Xc)/ (N1 + Nc) (For the combined population),
            where X1, Xc: Number of users from groups test & control that fulfilled the objective,
                & N1, Nc: Total number of users from test & control group

  Z = Observed_Difference / Standard_Error
    = (p1 - pc)/ sqrt(p_total * (1 - p_total) * (1/N1 + 1/Nc))



The confidence level for the computed Z value is looked up in a normal table. If it is greater than 1.96 (or 2.58), the null hypothesis can be rejected with a confidence level of 95% (or 99%). This would indicate that the behavior of the test group is significantly different from that of the control group, the likely cause being the element of change brought in by the alternate version of the product. On the other hand, if the Z value is less than 1.96, the null hypothesis is not rejected & the element of change is not considered to have made any significant impact on fulfillment of the stated objective.
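
The computation is compact enough to script. A minimal Python sketch with illustrative counts (the standard normal CDF is derived from math.erf):

    import math

    def z_two_proportions(x1, n1, xc, nc):
        """Z statistic for the difference between test & control proportions."""
        p1, pc = x1 / n1, xc / nc
        p_total = (x1 + xc) / (n1 + nc)
        se = math.sqrt(p_total * (1 - p_total) * (1 / n1 + 1 / nc))
        return (p1 - pc) / se

    def p_value_two_sided(z):
        phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
        return 2 * (1 - phi)

    # Illustrative counts: 230/2000 conversions in test vs 180/2000 in control
    z = z_two_proportions(230, 2000, 180, 2000)
    print(f"Z = {z:.3f}, two-sided p = {p_value_two_sided(z):.4f}")
    # Z ~ 2.61 > 1.96 => reject H0 at the 95% confidence level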

Sunday, March 3, 2019

Mendelian Randomization

The second law of Mendelian inheritance is about the independent assortment of alleles at the time of gamete (sperm & egg cell) formation. Therefore, within the population of any given species, genetic variants are likely to be distributed at random, independent of any external factors. This insight forms the basis of the Mendelian Randomization (MR) technique, typically applied in epidemiological studies.

Epidemiological studies try to establish a causal link (given some known association) between a particular risk factor & a disease, e.g. smoking to cancer, blood pressure to stroke, etc. The association in many cases turns out to be non-causal, reverse causal, etc. Establishing the true effect becomes challenging due to the presence of confounding factors: social, behavioral, environmental, physiological, etc. MR helps tackle the confounding factors in such situations.

In MR, genetic variants (polymorphisms) or genotypes that have an effect similar to the risk factor/ exposure are identified. An additional constraint is that the genotype must not have any direct influence on the disease. The existence of the genotype in the population is random, independent of any external influence. So the presence (or absence) of the disease within the sub-population possessing the genotype establishes (or refutes) that the risk factor is actually the cause of the disease. Several studies based on Mendelian randomization have been carried out successfully.

Example 1: Consider a study to establish the causal relationship (given an observed association) between raised cholesterol levels & coronary heart disease (CHD). Given the presence of several confounding factors such as age, physiology, smoking/ drinking habits, reverse causation (CHD causing raised cholesterol), etc., the MR approach would be beneficial.

The approach would be to identify a genotype/ gene variant known to be linked to an increase in total cholesterol levels (but with no direct bearing on CHD). The propensity for CHD is tested for all subjects carrying the particular genotype; if/ when found much higher than in the general population (not possessing the gene variant), this establishes that raised cholesterol levels have a causal relationship with CHD.

Instrumental Variables

MR is an application of the statistical technique of instrumental variables (IV) estimation. The IV technique is likewise used to establish causal relationships in the presence of confounding factors.

When applied to regression models, the IV technique is particularly beneficial when the explanatory variables (covariates) are correlated with the error term & give biased results. The choice of IV is such that it only induces changes in the explanatory variables, without having any independent effect on the dependent variable; the IV must not be correlated with the error term. Selecting an IV that fulfills these criteria is largely an analytical process supported by some observational data, leveraging relevant priors about the field of study.

Equating MR to IV:
  • Risk Factor/ Effect = Explanatory Variable
  • Disease = Dependent Variable
  • Genotype = Instrumental Variable
Selection of the genotype (IV) is based on prior knowledge of genes, from existing sources, literature, etc.
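
For intuition, the IV estimate itself is simple to compute. Here is a minimal two-stage least squares (Wald ratio) sketch on simulated data; all variable names, effect sizes & the continuous outcome are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    u = rng.normal(size=n)                 # unobserved confounder
    z = rng.binomial(2, 0.3, size=n)       # genotype: allele count 0/1/2 (the IV)
    x = 0.5 * z + u + rng.normal(size=n)   # risk factor, confounded by u
    y = 0.8 * x + u + rng.normal(size=n)   # outcome; true causal effect = 0.8

    # Naive regression of y on x is biased upwards by the confounder u
    naive = np.cov(x, y)[0, 1] / np.var(x)

    # IV estimate: effect of z on y scaled by the effect of z on x (Wald ratio)
    iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

    print(f"naive: {naive:.3f}, IV: {iv:.3f} (true effect 0.8)")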

Saturday, March 2, 2019

Mendelian Inheritance

Gregor Mendel was a phenomenal scientist of the nineteenth century. A monk by profession, he is considered the founder of modern genetics. In the 1850-60s, in the garden of his monastery, he performed systematic hybridization experiments with the pea plant over successive generations (second - F2, third - F3, etc.). Through these experiments he concluded that traits get inherited by progeny as discrete units with a perfectly binary (either/ or) characteristic from the ancestors, as opposed to the then existing notion of a blending of traits.

The following are the laws of Mendelian inheritance:
  • Law of Segregation:  During gamete (sperm or egg cell) formation, allele pairs separate out at random & only one of the alleles are carried by each gamete for each gene.
  • Law of Independent Assortment: Genes for different traits segregate independently of other pairs of alleles during the formation of gametes.
  • Law of Dominance: Some alleles are dominant while others are recessive. When both are present, the dominant allele determines the expressed trait.

Where,

  Human body
    is made of -> Cells
                    (building blocks of life; contain biomolecules
                     such as proteins & DNA)
      containing --> Chromosomes
                    (one DNA molecule + some proteins, in the cell's
                     nucleus; double-helix shape; 46 in humans,
                     23 inherited from each parent)
        having ---> Genes
                    (code to synthesize proteins & biocomponents;
                     2 alleles, or variant forms of a trait,
                     one inherited per parent)
          that get coded to ----> Proteins
                    (large biomolecules of amino acid chains;
                     participate in a vast variety of cellular
                     processes & biological functions: metabolic
                     reactions, signaling, etc.; exist within &
                     get recycled by the cells)

While Mendel's findings were not popular initially, they were rediscovered almost half a century later. Though Mendel limited his experiments to traits governed by a single gene, the results were significant. They helped form our understanding of genetics & heredity (genes) that continues to this day.
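
Mendel's famous 3:1 F2 phenotype ratio for a single dominant/ recessive trait follows directly from the law of segregation, & is easy to simulate (a minimal sketch with 'A' dominant & 'a' recessive):

    import random

    def gamete(parent):
        # Law of segregation: each gamete carries one allele, chosen at random
        return random.choice(parent)

    f1 = ('A', 'a')                # F1 generation: uniformly heterozygous
    trials = 100_000
    dominant = sum(
        1 for _ in range(trials)
        if 'A' in (gamete(f1), gamete(f1))   # offspring shows trait if any 'A'
    )
    print(f"Dominant phenotype: {dominant / trials:.3f} (expected ~0.75)")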

Sunday, February 24, 2019

Memory Gene

I have a conjecture that someone is soon going to discover a memory gene in the human genome. This doesn't seem to have been done or published in any scientific literature so far. The concept of Genetic Memory from an online game comes close, but then that's fiction.

The idea of the memory gene is that this gene on the human genome will act as a memory card. Whatever data the individual writes to the memory gene in their lifetime can later be retrieved by their progeny in their lifetimes. The space available in the memory gene would be tiny compared to what is available in the brain: if the disk space of the human brain is a petabyte (= 10^12 kilobytes), the space of the memory gene would be about 10 kilobytes. So very little can be written to the memory gene.

Unlike the brain, to which every bit of information (visual, aural, textual, etc.) can be written at will, writing to the memory gene would require some serious intent & need. It would be more akin to etching on a brass plate: strenuous but permanent. The intent would be largely triggered by the individual's experience(s), particularly ones that trigger strong emotions, perhaps beneficial to survival. Once written to the memory gene, this information would carry forward to the offspring.

The human genome is known to have about 2% coding DNA, the rest non-coding. The coding portions carry the (genetic) instructions to synthesize proteins, while the purpose of the non-coding portions is not clearly known so far. The memory gene would likely have a memory addressing mechanism, followed by the actual memory data stored in the large non-coding portion.

At the early age of 2 or 3 years, when a good portion of brain development has happened, memory recovery would begin. The mRNA, ribosomes & the rest of the translation machinery would get to work translating the genetic code from the memory gene to synthesize the appropriate proteins & biomolecules of the brain cells. In the process, the memory data would be restored block by block in the brain, perhaps over a period of low activity such as the night's sleep. The individual would later awaken to newly transferred knowledge about unknowns, which would appear to be intuitive. Since the memory recovery would take place at an early age, conflicts between the experiences of the individual and the ancestor wouldn't happen.
  
These are some basic features of the very complex memory gene. As mentioned earlier, this is purely a conjecture and shouldn't be taken otherwise. I look forward to exploring genuine scientific research in this space as it gets formalized & shared.

Update 1 (27-Aug-19):
For some real research take a look at the following:

 =>> Arc Gene:
 The neuronal gene Arc is required for synaptic plasticity and cognition. Arc protein capsids resemble retroviruses/ retrotransposons in their transfer between cells, followed by activity-dependent translation. These studies throw light on a completely new way in which neurons could send genetic information to one another. More details from the 2018 publication available here:
  •    https://www.nih.gov/news-events/news-releases/memory-gene-goes-viral
  •    https://www.cell.com/cell/comments/S0092-8674(17)31504-0
  •    https://www.ncbi.nlm.nih.gov/pubmed/29328915/

  =>> Memories pass between generations (2013):
 When the grand-parent generation of mice is taught to fear an odor, the next two generations (children & grand-children) retain the fear:
  •    https://www.bbc.com/news/health-25156510
  •    https://www.nature.com/articles/nn.3594
  •    https://www.nature.com/articles/nn.3603

 =>> Epigenetics & Cellular Memory (1970s onwards):
  •    https://en.wikipedia.org/w/index.php?title=Genetic_memory_(biology)&oldid=882561903
  •    https://en.wikipedia.org/w/index.php?title=Genomic_imprinting&oldid=908987981

  =>> Psychology - Genetic Memory (1940s onwards): Largely focused on the phenomenon of knowing things that weren't explicitly learned by an individual:
  •    https://blogs.scientificamerican.com/guest-blog/genetic-memory-how-we-know-things-we-never-learned/
  •    https://en.wikipedia.org/w/index.php?title=Genetic_memory_(psychology)&oldid=904552075
  •    http://www.bahaistudies.net/asma/The-Concept-of-the-Collective-Unconscious.pdf
  •    https://en.wikipedia.org/wiki/Collective_unconscious

Friday, February 22, 2019

Taller Woman Dance Partner

In the book Think Stats, there's an exercise to work out the percentage of dance couples where the woman is taller, when paired up at random. Mean heights (cm) & their variances are given as 178 & 59.4 for men, & 163 & 52.8 for women.

The two height distributions, for men & women, can be assumed Normal. The solution is to work out the total probability across the two curves where the condition height_woman > height_man holds. This needs to be done for every single height point (h), i.e. over the entire spread of the two curves (-∞, ∞). In other words, for each h, take the fraction of women with height in a small window around h, multiplied by the fraction of men with height below h (the integral of the men's curve over (-∞, h)), & sum this product over all h.

There are empirical solutions where N data points are drawn from the two Normal distributions with the appropriate mean & sd (variance) for men & women. These are paired up at random (averaged over k runs) to compute the number of pairs with a taller woman.

The area computation can also be approximated by limiting it to the ±3 sd range (which covers 99.7%) of height values on either side of the two curves (140 cm to 185 cm). Then, sliding along the height values (h) starting from h=185 downwards in steps of size s=0.5 or so, compute z at each point:

z = (h - m)/ sd, where m & sd are the corresponding mean & standard deviation of the two curves.

Refer to the one-sided standard normal table to compute the percentage of women to the right of h (from which the percentage of women within each step window follows, as the difference between consecutive values) & the percentage of men to the left of h. The product of the men's left-percentage & the women's window-percentage is the likelihood contribution at point h, & a summation of the same over all h gives the final percentage. The equivalent solution using NORMDIST yields a likelihood of ~7.5%, slightly below the expected value (due to the coarse step size of 0.5).
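
The answer can be cross-checked in a few lines of Python, both by simulation & in closed form. Since the difference W - M of two independent Normals is itself Normal, P(W > M) = 1 - Φ((178 - 163)/√(59.4 + 52.8)):

    import math
    import random

    MU_M, VAR_M = 178, 59.4   # men:   mean height (cm), variance
    MU_F, VAR_F = 163, 52.8   # women: mean height (cm), variance

    # Closed form: W - M ~ Normal(MU_F - MU_M, VAR_F + VAR_M)
    z = (MU_M - MU_F) / math.sqrt(VAR_M + VAR_F)
    exact = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # 1 - Phi(z)

    # Monte Carlo: pair random women & men, count taller-woman couples
    n = 200_000
    taller = sum(
        random.gauss(MU_F, math.sqrt(VAR_F)) > random.gauss(MU_M, math.sqrt(VAR_M))
        for _ in range(n)
    )
    print(f"closed form: {exact:.4f}, simulated: {taller / n:.4f}")   # ~0.078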

C1 plots the percentage of women to the right & the percentage of men to the left at different height values. C2 is the likelihood of seeing a couple with a taller woman within each step window of size 0.5. Interestingly, the peak of C2 lies between heights 172-172.5, about 1.24 sd from the women's mean (163) & 0.78 sd from the men's mean (178). The spike at the end of curve C2 at point 185.5 is the likelihood for all heights > 185.5, i.e. the window (185.5, ∞).

Playing around with different height & variance values yields different final results. For instance, at the moment the two means are separated by about 2 sd. If this is reduced to 1 sd, by moving the mean height of women (or men) to about 170.5 cm, the final likelihood jumps to about 23%. This is understandable, since the population now has far more taller women. The height variance for men is more than that for women; setting both to the identical value 52.8 (fewer shorter men) lowers the percentage to about 6.9%, while setting both to 59.4 (more taller women) increases it to 8.1%.





Sample data points from the Confidence_Interval_Heights.ods worksheet. Columns: point (p); women's z-value (z_f) & % to the right of p (r_f); % of women within the step window (r_f_delta); men's z-value (z_m) & % to the left of p (l_m); likelihood l = l_m * r_f_delta:

    p        z_f         r_f         r_f_delta    z_m         l_m         l
    185.5    3.0964606   0.0009792   0.0009792    0.9731237   0.8347541   0.0008174
    185      3.0276504   0.0012323   0.0002531    0.9082488   0.8181266   0.0002071
    184.5    2.9588401   0.0015440   0.0003117    0.8433739   0.8004903   0.0002495
    184      2.8900299   0.0019260   0.0003820    0.7784989   0.7818625   0.0002987
    183.5    2.8212196   0.0023921   0.0004660    0.7136240   0.7622702   0.0003553
    183      2.7524094   0.0029579   0.0005659    0.6487491   0.7417497   0.0004197
    182.5    2.6835992   0.0036417   0.0006838    0.5838742   0.7203475   0.0004926
    182      2.6147889   0.0044641   0.0008224    0.5189993   0.6981194   0.0005741
    ...