<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Mike's Blog</title><link>https://mikeogilvy.github.io/blog/</link><description>Recent content on Mike's Blog</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 28 Mar 2026 02:24:29 +0800</lastBuildDate><atom:link href="https://mikeogilvy.github.io/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>ML-2 Note: Linear Models for Classification and GLMs</title><link>https://mikeogilvy.github.io/blog/posts/ml/ml---2/</link><pubDate>Sat, 28 Mar 2026 02:24:29 +0800</pubDate><guid>https://mikeogilvy.github.io/blog/posts/ml/ml---2/</guid><description>&lt;h1 id="ml-2-note-linear-models-for-classification-and-glms"&gt;ML-2 Note: Linear Models for Classification and GLMs&lt;/h1&gt;
&lt;h2 id="1-why-classification-uses-logistic-regression"&gt;1. Why Classification Uses Logistic Regression&lt;/h2&gt;
&lt;p&gt;In classification problems, the target variable is discrete.&lt;/p&gt;
&lt;p&gt;Binary classification example:&lt;/p&gt;
&lt;p&gt;$$y \in \left\{ 0,1 \right\}$$&lt;/p&gt;
&lt;p&gt;Given input features&lt;/p&gt;
&lt;p&gt;$$x \in \mathbb{R}^d$$&lt;/p&gt;
&lt;p&gt;we want to model&lt;/p&gt;
&lt;p&gt;$$P(y=1|x)$$&lt;/p&gt;
&lt;h3 id="problem-with-linear-regression"&gt;Problem with Linear Regression&lt;/h3&gt;
&lt;p&gt;A linear model predicts&lt;/p&gt;
&lt;p&gt;$$f(x) = w^T x$$&lt;/p&gt;
&lt;p&gt;but&lt;/p&gt;
&lt;p&gt;$$w^T x \in (-\infty, \infty)$$&lt;/p&gt;
&lt;p&gt;while probabilities must satisfy&lt;/p&gt;
&lt;p&gt;$$P(y=1|x) \in [0,1]$$&lt;/p&gt;
&lt;p&gt;Thus we need a function that maps \((-\infty, \infty)\) into \([0,1]\).&lt;/p&gt;</description></item><item><title>ML-1 Note: Supervised Learning; Linear Regression</title><link>https://mikeogilvy.github.io/blog/posts/ml/ml---1/</link><pubDate>Sun, 22 Mar 2026 03:32:29 +0800</pubDate><guid>https://mikeogilvy.github.io/blog/posts/ml/ml---1/</guid><description>&lt;h1 id="ml-1-note-supervised-learning-linear-regression"&gt;ML-1 Note: Supervised Learning; Linear Regression&lt;/h1&gt;
&lt;h2 id="1-basic-model-of-linear-regression"&gt;1. Basic Model of Linear Regression&lt;/h2&gt;
&lt;p&gt;Linear Regression is one of the simplest and most fundamental models in &lt;strong&gt;supervised learning&lt;/strong&gt;.&lt;br&gt;
The goal is to model the relationship between input features and a continuous target variable.&lt;/p&gt;
&lt;h3 id="model-form"&gt;Model Form&lt;/h3&gt;
&lt;p&gt;For a dataset with feature vector \(x\):&lt;/p&gt;
&lt;p&gt;\[
y = w^T x + b
\]&lt;/p&gt;
&lt;p&gt;or equivalently&lt;/p&gt;
&lt;p&gt;\[
\hat{y} = \theta^T x
\]&lt;/p&gt;
&lt;p&gt;where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;\(x\): input feature vector&lt;/li&gt;
&lt;li&gt;\(w\): weight vector&lt;/li&gt;
&lt;li&gt;\(b\): bias term&lt;/li&gt;
&lt;li&gt;\(\theta\): parameter vector (including bias)&lt;/li&gt;
&lt;li&gt;\(\hat{y}\): predicted value&lt;/li&gt;
&lt;/ul&gt;
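&lt;p&gt;A tiny NumPy sketch (the numbers are made up for illustration) showing why the two forms are equivalent: folding the bias \(b\) into \(\theta\) by appending a constant-1 feature to \(x\) makes \(\theta^T x\) reproduce \(w^T x + b\) exactly:&lt;/p&gt;

```python
import numpy as np

w = np.array([2.0, -1.0])   # weight vector (illustrative values)
b = 0.5                     # bias term
x = np.array([3.0, 4.0])    # input feature vector

y_wb = w @ x + b            # first form: w^T x + b

theta = np.append(w, b)     # parameter vector including the bias
x_aug = np.append(x, 1.0)   # x augmented with a constant-1 feature
y_theta = theta @ x_aug     # second form: theta^T x

print(y_wb, y_theta)        # both equal 2.5
```

&lt;p&gt;This augmentation is why many derivations drop \(b\) entirely and work with a single parameter vector \(\theta\).&lt;/p&gt;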
&lt;hr&gt;
&lt;h3 id="loss-function"&gt;Loss Function&lt;/h3&gt;
&lt;p&gt;The most common loss function for linear regression is &lt;strong&gt;Mean Squared Error (MSE)&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Learning and Practice of Single-Cell Sequencing</title><link>https://mikeogilvy.github.io/blog/posts/single-cell/learning-and-practice-of-single-cell-sequencing/</link><pubDate>Mon, 09 Mar 2026 16:09:29 +0800</pubDate><guid>https://mikeogilvy.github.io/blog/posts/single-cell/learning-and-practice-of-single-cell-sequencing/</guid><description>&lt;div style="
text-align: justify;
line-height: 1.6;
hyphens: auto;
overflow-wrap: break-word;
max-width: 100%;
"&gt;
&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;Single-cell RNA sequencing (scRNA-seq) has become an essential technique for studying cellular heterogeneity and complex biological systems. During this winter research training, I systematically studied the general workflow and analytical principles of scRNA-seq based on Single Cell Best Practices and related resources, covering key steps such as data preprocessing, quality control, normalization, dimensionality reduction, clustering, and cell type annotation. To consolidate the knowledge, I first reproduced a complete analysis pipeline using publicly available immune cell data to familiarize myself with standard procedures and tools, then independently applied the same workflow to a publicly available human brain infection-related single-cell dataset, identifying distinct cell populations and infection-associated transcriptional changes across cell types. Overall, this training deepened my understanding of scRNA-seq data analysis, demonstrated the adaptability of standardized workflows to diverse biological contexts, and provided preliminary insights into cellular responses in infected human brain tissue as a foundation for further studies.&lt;/p&gt;</description></item><item><title>Hello World</title><link>https://mikeogilvy.github.io/blog/posts/hello-world/hello-world/</link><pubDate>Sat, 07 Mar 2026 16:09:29 +0800</pubDate><guid>https://mikeogilvy.github.io/blog/posts/hello-world/hello-world/</guid><description/></item><item><title>Friend Link</title><link>https://mikeogilvy.github.io/blog/friends/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://mikeogilvy.github.io/blog/friends/</guid><description>My Friends</description></item></channel></rss>