Apr 13 20:16:10.006368 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:16:10.006392 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.006400 kernel: BIOS-provided physical RAM map:
Apr 13 20:16:10.006407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 13 20:16:10.006412 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 13 20:16:10.006422 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:16:10.006428 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 13 20:16:10.006434 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 13 20:16:10.006440 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:16:10.006446 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:16:10.006451 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:16:10.006457 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:16:10.006463 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 13 20:16:10.006472 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:16:10.006479 kernel: NX (Execute Disable) protection: active
Apr 13 20:16:10.006485 kernel: APIC: Static calls initialized
Apr 13 20:16:10.006492 kernel: SMBIOS 2.8 present.
Apr 13 20:16:10.006498 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 13 20:16:10.006504 kernel: Hypervisor detected: KVM
Apr 13 20:16:10.006513 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:16:10.006520 kernel: kvm-clock: using sched offset of 5771338737 cycles
Apr 13 20:16:10.006526 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:16:10.006533 kernel: tsc: Detected 1999.999 MHz processor
Apr 13 20:16:10.006540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:16:10.006546 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:16:10.006553 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 13 20:16:10.006560 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:16:10.006566 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:16:10.006576 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 13 20:16:10.006582 kernel: Using GB pages for direct mapping
Apr 13 20:16:10.006589 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:16:10.006595 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 13 20:16:10.006601 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006608 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006614 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006621 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 13 20:16:10.006627 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006636 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006643 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006649 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006659 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 13 20:16:10.006666 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 13 20:16:10.006673 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 13 20:16:10.006682 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 13 20:16:10.006689 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 13 20:16:10.006696 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 13 20:16:10.006702 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 13 20:16:10.006709 kernel: No NUMA configuration found
Apr 13 20:16:10.006716 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 13 20:16:10.006722 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 13 20:16:10.006729 kernel: Zone ranges:
Apr 13 20:16:10.006739 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:16:10.006746 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:16:10.006753 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:16:10.006760 kernel: Movable zone start for each node
Apr 13 20:16:10.006767 kernel: Early memory node ranges
Apr 13 20:16:10.006773 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:16:10.006780 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 13 20:16:10.006787 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:16:10.006794 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 13 20:16:10.006801 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:16:10.006811 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:16:10.006818 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 13 20:16:10.006824 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:16:10.006831 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:16:10.006838 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:16:10.006845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:16:10.006852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:16:10.006859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:16:10.006866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:16:10.006875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:16:10.006882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:16:10.006889 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:16:10.006896 kernel: TSC deadline timer available
Apr 13 20:16:10.006903 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:16:10.006910 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:16:10.006917 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 20:16:10.006923 kernel: kvm-guest: setup PV sched yield
Apr 13 20:16:10.006930 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:16:10.007131 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:16:10.007138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:16:10.007145 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:16:10.007152 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:16:10.007160 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:16:10.007166 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:16:10.007173 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:16:10.007180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:16:10.007188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.007198 kernel: random: crng init done
Apr 13 20:16:10.007205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:16:10.007212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:16:10.007219 kernel: Fallback order for Node 0: 0
Apr 13 20:16:10.007226 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 13 20:16:10.009417 kernel: Policy zone: Normal
Apr 13 20:16:10.009425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:16:10.009432 kernel: software IO TLB: area num 2.
Apr 13 20:16:10.009444 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227300K reserved, 0K cma-reserved)
Apr 13 20:16:10.009451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:16:10.009457 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:16:10.009464 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:16:10.009471 kernel: Dynamic Preempt: voluntary
Apr 13 20:16:10.009478 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:16:10.009485 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:16:10.009492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:16:10.009499 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:16:10.009508 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:16:10.009515 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:16:10.009522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:16:10.009529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:16:10.009535 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:16:10.009542 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:16:10.009548 kernel: Console: colour VGA+ 80x25
Apr 13 20:16:10.009555 kernel: printk: console [tty0] enabled
Apr 13 20:16:10.009562 kernel: printk: console [ttyS0] enabled
Apr 13 20:16:10.009571 kernel: ACPI: Core revision 20230628
Apr 13 20:16:10.009578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:16:10.009584 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:16:10.009591 kernel: x2apic enabled
Apr 13 20:16:10.009607 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:16:10.009616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 20:16:10.009623 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 20:16:10.009630 kernel: kvm-guest: setup PV IPIs
Apr 13 20:16:10.009637 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:16:10.009644 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:16:10.009651 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Apr 13 20:16:10.009658 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:16:10.009667 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:16:10.009674 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:16:10.009681 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:16:10.009688 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:16:10.009695 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:16:10.009705 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:16:10.009712 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:16:10.009719 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:16:10.009726 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 13 20:16:10.009734 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 13 20:16:10.009741 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:16:10.009748 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 13 20:16:10.009755 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:16:10.009764 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:16:10.009772 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:16:10.009778 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:16:10.009786 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:16:10.009793 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:16:10.009800 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:16:10.009807 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 13 20:16:10.009814 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 13 20:16:10.009821 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:16:10.009831 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:16:10.009838 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:16:10.009845 kernel: landlock: Up and running.
Apr 13 20:16:10.009852 kernel: SELinux: Initializing.
Apr 13 20:16:10.009859 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.009866 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.009874 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 13 20:16:10.009881 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009898 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009905 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:16:10.009912 kernel: ... version:                0
Apr 13 20:16:10.009920 kernel: ... bit width:              48
Apr 13 20:16:10.009927 kernel: ... generic registers:      6
Apr 13 20:16:10.009933 kernel: ... value mask:             0000ffffffffffff
Apr 13 20:16:10.009941 kernel: ... max period:             00007fffffffffff
Apr 13 20:16:10.009948 kernel: ... fixed-purpose events:   0
Apr 13 20:16:10.009954 kernel: ... event mask:             000000000000003f
Apr 13 20:16:10.009964 kernel: signal: max sigframe size: 3376
Apr 13 20:16:10.009971 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:16:10.009978 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:16:10.009985 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:16:10.009992 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:16:10.009999 kernel: .... node #0, CPUs: #1
Apr 13 20:16:10.010006 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:16:10.010013 kernel: smpboot: Max logical packages: 1
Apr 13 20:16:10.010020 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 13 20:16:10.010030 kernel: devtmpfs: initialized
Apr 13 20:16:10.010037 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:16:10.010044 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:16:10.010051 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:16:10.010058 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:16:10.010065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:16:10.010072 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:16:10.010079 kernel: audit: type=2000 audit(1776111369.082:1): state=initialized audit_enabled=0 res=1
Apr 13 20:16:10.010086 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:16:10.010096 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:16:10.010103 kernel: cpuidle: using governor menu
Apr 13 20:16:10.010110 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:16:10.010117 kernel: dca service started, version 1.12.1
Apr 13 20:16:10.010124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:16:10.010131 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:16:10.010138 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:16:10.010145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:16:10.010153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:16:10.010162 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:16:10.010169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:16:10.010176 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:16:10.010183 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:16:10.010190 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:16:10.010197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:16:10.010204 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:16:10.010211 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:16:10.010218 kernel: ACPI: Interpreter enabled
Apr 13 20:16:10.010249 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:16:10.010257 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:16:10.010264 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:16:10.010271 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:16:10.010278 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:16:10.010285 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:16:10.010473 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:16:10.010614 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:16:10.010749 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:16:10.010759 kernel: PCI host bridge to bus 0000:00
Apr 13 20:16:10.010888 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:16:10.011203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:16:10.013362 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:16:10.013486 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:16:10.013602 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:16:10.013724 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 13 20:16:10.013838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:16:10.014168 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:16:10.014337 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 20:16:10.014465 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 20:16:10.014623 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 20:16:10.014756 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 20:16:10.014879 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:16:10.015019 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 13 20:16:10.018903 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:16:10.019056 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 20:16:10.019187 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 20:16:10.019364 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:16:10.019499 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:16:10.019622 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 20:16:10.019744 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 20:16:10.019868 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 20:16:10.020000 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:16:10.020125 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:16:10.020314 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:16:10.020450 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 13 20:16:10.020572 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 13 20:16:10.020703 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:16:10.020826 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:16:10.020835 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:16:10.020843 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:16:10.020850 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:16:10.020864 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:16:10.020870 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:16:10.020877 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:16:10.020884 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:16:10.020891 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:16:10.020899 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:16:10.020906 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:16:10.020913 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:16:10.020920 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:16:10.020930 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:16:10.020937 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:16:10.020943 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:16:10.020950 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:16:10.020957 kernel: iommu: Default domain type: Translated
Apr 13 20:16:10.020964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:16:10.020971 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:16:10.020979 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:16:10.020986 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 13 20:16:10.020996 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 13 20:16:10.021119 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:16:10.022625 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:16:10.022761 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:16:10.022771 kernel: vgaarb: loaded
Apr 13 20:16:10.022779 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:16:10.022786 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:16:10.022794 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:16:10.022806 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:16:10.022813 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:16:10.022820 kernel: pnp: PnP ACPI init
Apr 13 20:16:10.022958 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 20:16:10.022969 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:16:10.022976 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:16:10.022984 kernel: NET: Registered PF_INET protocol family
Apr 13 20:16:10.022991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:16:10.023002 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:16:10.023009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:16:10.023016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:16:10.023023 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:16:10.023030 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:16:10.023037 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.023044 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.023051 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:16:10.023058 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:16:10.023436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:16:10.023554 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:16:10.023668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:16:10.023780 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 13 20:16:10.023919 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 20:16:10.024221 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 13 20:16:10.025429 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:16:10.025437 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:16:10.025449 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 13 20:16:10.025456 kernel: Initialise system trusted keyrings
Apr 13 20:16:10.025464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 20:16:10.025471 kernel: Key type asymmetric registered
Apr 13 20:16:10.025478 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:16:10.025485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:16:10.025492 kernel: io scheduler mq-deadline registered
Apr 13 20:16:10.025499 kernel: io scheduler kyber registered
Apr 13 20:16:10.025506 kernel: io scheduler bfq registered
Apr 13 20:16:10.025513 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:16:10.025524 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 20:16:10.025531 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 20:16:10.025538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:16:10.025545 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:16:10.025553 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:16:10.025560 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:16:10.025567 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:16:10.025574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:16:10.025716 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 13 20:16:10.025843 kernel: rtc_cmos 00:03: registered as rtc0
Apr 13 20:16:10.025962 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:16:09 UTC (1776111369)
Apr 13 20:16:10.026079 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 13 20:16:10.026088 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 13 20:16:10.026096 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:16:10.026103 kernel: Segment Routing with IPv6
Apr 13 20:16:10.026110 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:16:10.026122 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:16:10.026129 kernel: Key type dns_resolver registered
Apr 13 20:16:10.026136 kernel: IPI shorthand broadcast: enabled
Apr 13 20:16:10.026143 kernel: sched_clock: Marking stable (905005641, 343409177)->(1384594232, -136179414)
Apr 13 20:16:10.026151 kernel: registered taskstats version 1
Apr 13 20:16:10.026158 kernel: Loading compiled-in X.509 certificates
Apr 13 20:16:10.026165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:16:10.026172 kernel: Key type .fscrypt registered
Apr 13 20:16:10.026180 kernel: Key type fscrypt-provisioning registered
Apr 13 20:16:10.026190 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:16:10.026197 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:16:10.026205 kernel: ima: No architecture policies found
Apr 13 20:16:10.026212 kernel: clk: Disabling unused clocks
Apr 13 20:16:10.026220 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:16:10.026241 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:16:10.026249 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:16:10.026256 kernel: Run /init as init process
Apr 13 20:16:10.026263 kernel: with arguments:
Apr 13 20:16:10.026274 kernel: /init
Apr 13 20:16:10.026281 kernel: with environment:
Apr 13 20:16:10.026288 kernel: HOME=/
Apr 13 20:16:10.026316 kernel: TERM=linux
Apr 13 20:16:10.026327 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:16:10.026336 systemd[1]: Detected virtualization kvm.
Apr 13 20:16:10.026344 systemd[1]: Detected architecture x86-64.
Apr 13 20:16:10.026352 systemd[1]: Running in initrd.
Apr 13 20:16:10.026363 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:16:10.026370 systemd[1]: Hostname set to .
Apr 13 20:16:10.026378 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:16:10.026386 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:16:10.026394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:16:10.026418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:16:10.026431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:16:10.026439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:16:10.026447 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:16:10.026455 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:16:10.026465 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:16:10.026473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:16:10.026484 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:16:10.026492 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:16:10.026499 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:16:10.026507 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:16:10.026515 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:16:10.026523 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:16:10.026531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:16:10.026539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:16:10.026547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:16:10.026558 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:16:10.026566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:16:10.026574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:16:10.026582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:16:10.026590 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:16:10.026598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:16:10.026606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:16:10.026614 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:16:10.026622 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:16:10.026632 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:16:10.026640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:16:10.026670 systemd-journald[178]: Collecting audit messages is disabled.
Apr 13 20:16:10.026689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:10.026700 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:16:10.026711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:16:10.026719 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:16:10.026731 systemd-journald[178]: Journal started
Apr 13 20:16:10.026747 systemd-journald[178]: Runtime Journal (/run/log/journal/f5d5f8ede68a4eda8ba6ca4df12fd3a6) is 8.0M, max 78.3M, 70.3M free.
Apr 13 20:16:10.029363 systemd-modules-load[179]: Inserted module 'overlay'
Apr 13 20:16:10.125200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:16:10.125289 kernel: Bridge firewalling registered
Apr 13 20:16:10.057879 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 13 20:16:10.129953 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:16:10.131595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:16:10.132836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:10.141377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:10.144622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:16:10.152466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:16:10.154418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:16:10.170444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:16:10.177070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:16:10.195973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:10.197084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:16:10.198061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:16:10.204482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:16:10.214403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:16:10.216569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:16:10.229937 dracut-cmdline[209]: dracut-dracut-053
Apr 13 20:16:10.233950 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.249180 systemd-resolved[211]: Positive Trust Anchors:
Apr 13 20:16:10.249198 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:16:10.249247 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:16:10.254135 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 13 20:16:10.255434 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:16:10.259025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:16:10.320274 kernel: SCSI subsystem initialized
Apr 13 20:16:10.330253 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:16:10.342263 kernel: iscsi: registered transport (tcp)
Apr 13 20:16:10.365244 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:16:10.365304 kernel: QLogic iSCSI HBA Driver
Apr 13 20:16:10.410998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:16:10.424371 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:16:10.453436 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:16:10.453490 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:16:10.455566 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:16:10.502267 kernel: raid6: avx2x4 gen() 29063 MB/s
Apr 13 20:16:10.520261 kernel: raid6: avx2x2 gen() 26603 MB/s
Apr 13 20:16:10.538423 kernel: raid6: avx2x1 gen() 22319 MB/s
Apr 13 20:16:10.538458 kernel: raid6: using algorithm avx2x4 gen() 29063 MB/s
Apr 13 20:16:10.560770 kernel: raid6: .... xor() 4536 MB/s, rmw enabled
Apr 13 20:16:10.560819 kernel: raid6: using avx2x2 recovery algorithm
Apr 13 20:16:10.583395 kernel: xor: automatically using best checksumming function avx
Apr 13 20:16:10.719337 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:16:10.732584 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:16:10.739484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:16:10.756493 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 13 20:16:10.761309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:16:10.769444 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:16:10.786323 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 13 20:16:10.821422 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:16:10.827350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:16:10.905637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:16:10.913524 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:16:10.932425 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:16:10.938154 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:16:10.940659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:16:10.942554 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:16:10.949436 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:16:10.975148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:16:10.998333 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:16:11.001387 kernel: scsi host0: Virtio SCSI HBA
Apr 13 20:16:11.024546 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 20:16:11.025401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:16:11.205686 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:16:11.205730 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:16:11.205742 kernel: libata version 3.00 loaded.
Apr 13 20:16:11.025638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:11.190048 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:11.199684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:16:11.199856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:11.200855 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:11.241789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:11.280268 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 20:16:11.280502 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 20:16:11.283258 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 20:16:11.283477 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 20:16:11.286253 kernel: scsi host1: ahci
Apr 13 20:16:11.289253 kernel: scsi host2: ahci
Apr 13 20:16:11.291050 kernel: scsi host3: ahci
Apr 13 20:16:11.294263 kernel: scsi host4: ahci
Apr 13 20:16:11.297435 kernel: scsi host5: ahci
Apr 13 20:16:11.300983 kernel: scsi host6: ahci
Apr 13 20:16:11.301166 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 13 20:16:11.301179 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 13 20:16:11.301189 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 13 20:16:11.301198 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 13 20:16:11.301213 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 13 20:16:11.301223 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 13 20:16:11.395083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:11.406381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:11.424032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:11.615249 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.615326 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.621374 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.621446 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.625330 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.628489 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.638738 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 13 20:16:11.664391 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 13 20:16:11.664653 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 13 20:16:11.669697 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 13 20:16:11.669953 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 20:16:11.680322 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:16:11.680347 kernel: GPT:9289727 != 167739391
Apr 13 20:16:11.680359 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:16:11.683316 kernel: GPT:9289727 != 167739391
Apr 13 20:16:11.686442 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:16:11.686457 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:11.690531 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 13 20:16:11.726274 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (467)
Apr 13 20:16:11.733703 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 20:16:11.735837 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (463)
Apr 13 20:16:11.745363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 20:16:11.752075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 20:16:11.754360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 20:16:11.759797 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:16:11.770393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:16:11.776531 disk-uuid[566]: Primary Header is updated.
Apr 13 20:16:11.776531 disk-uuid[566]: Secondary Entries is updated.
Apr 13 20:16:11.776531 disk-uuid[566]: Secondary Header is updated.
Apr 13 20:16:11.782266 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:11.789264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:12.791360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:12.794762 disk-uuid[567]: The operation has completed successfully.
Apr 13 20:16:12.845349 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:16:12.845469 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:16:12.855487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:16:12.861393 sh[581]: Success
Apr 13 20:16:12.879254 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 20:16:12.919888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:16:12.929349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:16:12.931557 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:16:12.953621 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:16:12.953665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:12.953678 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:16:12.957526 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:16:12.961702 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:16:12.970324 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:16:12.971735 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:16:12.973048 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:16:12.979371 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:16:12.983521 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:16:13.008914 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:13.008956 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:13.009171 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:16:13.016856 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:16:13.016889 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:16:13.030215 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:16:13.035396 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:13.042990 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:16:13.051445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:16:13.122283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:16:13.130465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:16:13.141537 ignition[687]: Ignition 2.19.0
Apr 13 20:16:13.141547 ignition[687]: Stage: fetch-offline
Apr 13 20:16:13.141592 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:13.145812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:16:13.141604 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:13.141734 ignition[687]: parsed url from cmdline: ""
Apr 13 20:16:13.141742 ignition[687]: no config URL provided
Apr 13 20:16:13.141751 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.141768 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.141778 ignition[687]: failed to fetch config: resource requires networking
Apr 13 20:16:13.141981 ignition[687]: Ignition finished successfully
Apr 13 20:16:13.164393 systemd-networkd[766]: lo: Link UP
Apr 13 20:16:13.164402 systemd-networkd[766]: lo: Gained carrier
Apr 13 20:16:13.166482 systemd-networkd[766]: Enumeration completed
Apr 13 20:16:13.166573 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:16:13.167804 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:13.167809 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:16:13.169177 systemd[1]: Reached target network.target - Network.
Apr 13 20:16:13.171330 systemd-networkd[766]: eth0: Link UP
Apr 13 20:16:13.171335 systemd-networkd[766]: eth0: Gained carrier
Apr 13 20:16:13.171343 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:13.180367 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:16:13.193763 ignition[770]: Ignition 2.19.0
Apr 13 20:16:13.193776 ignition[770]: Stage: fetch
Apr 13 20:16:13.193928 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:13.193940 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:13.194020 ignition[770]: parsed url from cmdline: ""
Apr 13 20:16:13.194024 ignition[770]: no config URL provided
Apr 13 20:16:13.194030 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.194039 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.194056 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 13 20:16:13.194381 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.394552 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 13 20:16:13.394722 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.795299 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 13 20:16:13.795510 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.939316 systemd-networkd[766]: eth0: DHCPv4 address 172.234.25.54/24, gateway 172.234.25.1 acquired from 23.205.167.152
Apr 13 20:16:14.596320 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 13 20:16:14.693619 ignition[770]: PUT result: OK
Apr 13 20:16:14.693717 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 13 20:16:14.809176 ignition[770]: GET result: OK
Apr 13 20:16:14.809333 ignition[770]: parsing config with SHA512: ff32a685e3c18a097d0ecadc78a7a7d75524a59c8819cc162ada202b7727c80d59b172a462a1e78d47d2e9c6eee04fa3bad9a8f4f917e125cea541a88e4656b7
Apr 13 20:16:14.818206 unknown[770]: fetched base config from "system"
Apr 13 20:16:14.818620 ignition[770]: fetch: fetch complete
Apr 13 20:16:14.818221 unknown[770]: fetched base config from "system"
Apr 13 20:16:14.818627 ignition[770]: fetch: fetch passed
Apr 13 20:16:14.818250 unknown[770]: fetched user config from "akamai"
Apr 13 20:16:14.818673 ignition[770]: Ignition finished successfully
Apr 13 20:16:14.822404 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:16:14.830475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:16:14.847736 ignition[777]: Ignition 2.19.0
Apr 13 20:16:14.847751 ignition[777]: Stage: kargs
Apr 13 20:16:14.847939 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:14.847952 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:14.850611 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:16:14.848699 ignition[777]: kargs: kargs passed
Apr 13 20:16:14.848748 ignition[777]: Ignition finished successfully
Apr 13 20:16:14.862480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:16:14.875916 ignition[784]: Ignition 2.19.0
Apr 13 20:16:14.875933 ignition[784]: Stage: disks
Apr 13 20:16:14.876132 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:14.879949 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:16:14.876146 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:14.903216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:16:14.877260 ignition[784]: disks: disks passed
Apr 13 20:16:14.904532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:16:14.877320 ignition[784]: Ignition finished successfully
Apr 13 20:16:14.906457 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:16:14.908141 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:16:14.909742 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:16:14.917425 systemd-networkd[766]: eth0: Gained IPv6LL
Apr 13 20:16:14.917935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:16:14.937981 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 20:16:14.942009 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:16:14.950832 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:16:15.040260 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:16:15.040656 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:16:15.042085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:16:15.053374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:16:15.057354 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:16:15.059429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:16:15.060620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:16:15.060647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:16:15.068097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:16:15.070249 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800)
Apr 13 20:16:15.076255 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:15.076288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:15.076302 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:16:15.088137 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:16:15.088168 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:16:15.092536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:16:15.096652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:16:15.142185 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:16:15.148505 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:16:15.155391 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:16:15.162214 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:16:15.271512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:16:15.278336 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:16:15.281510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:16:15.293881 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:16:15.297742 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:15.324340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:16:15.329880 ignition[918]: INFO : Ignition 2.19.0
Apr 13 20:16:15.329880 ignition[918]: INFO : Stage: mount
Apr 13 20:16:15.329880 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:15.329880 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:15.329880 ignition[918]: INFO : mount: mount passed
Apr 13 20:16:15.329880 ignition[918]: INFO : Ignition finished successfully
Apr 13 20:16:15.332997 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:16:15.339391 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:16:16.046402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:16:16.061271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930)
Apr 13 20:16:16.068382 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:16.068455 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:16.068471 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:16:16.077904 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:16:16.077939 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:16:16.080534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:16:16.111007 ignition[947]: INFO : Ignition 2.19.0
Apr 13 20:16:16.111007 ignition[947]: INFO : Stage: files
Apr 13 20:16:16.113058 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:16.113058 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:16.113058 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:16:16.116196 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:16:16.116196 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:16:16.118340 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:16:16.119404 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:16:16.119404 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:16:16.119378 unknown[947]: wrote ssh authorized keys file for user: core
Apr 13 20:16:16.122401 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:16:16.122401 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:16:16.424337 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:16:16.460123 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:16:16.460123 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 20:16:16.872340 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:16:17.186054 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:17.186054 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: files passed
Apr 13 20:16:17.188792 ignition[947]: INFO : Ignition finished successfully
Apr 13 20:16:17.190680 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:16:17.220404 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:16:17.223397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:16:17.232722 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:16:17.232857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:16:17.256461 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.256461 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.258881 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.260646 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:16:17.262118 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:16:17.268435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:16:17.293632 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:16:17.294311 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:16:17.296308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:16:17.297652 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:16:17.299377 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:16:17.305567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:16:17.321896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:16:17.328385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:16:17.341879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:16:17.343015 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:16:17.345127 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:16:17.346996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:16:17.347130 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:16:17.348973 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:16:17.350072 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:16:17.351889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:16:17.353532 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:16:17.355024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:16:17.356748 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:16:17.358451 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:16:17.360153 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:16:17.361799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:16:17.363503 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:16:17.365066 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:16:17.365188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:16:17.367075 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:16:17.368185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:16:17.369721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:16:17.369835 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:16:17.371488 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:16:17.371600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:16:17.373741 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:16:17.373854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:16:17.374893 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:16:17.374993 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:16:17.391741 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:16:17.394449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:16:17.395263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:16:17.395423 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:16:17.399357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:16:17.399459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:16:17.410078 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:16:17.410210 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:16:17.415650 ignition[1000]: INFO : Ignition 2.19.0
Apr 13 20:16:17.417251 ignition[1000]: INFO : Stage: umount
Apr 13 20:16:17.417251 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:17.417251 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:17.424297 ignition[1000]: INFO : umount: umount passed
Apr 13 20:16:17.424297 ignition[1000]: INFO : Ignition finished successfully
Apr 13 20:16:17.422847 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:16:17.422976 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:16:17.423938 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:16:17.423993 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:16:17.428596 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:16:17.428656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:16:17.429845 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:16:17.429899 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:16:17.431609 systemd[1]: Stopped target network.target - Network.
Apr 13 20:16:17.433073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:16:17.433155 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:16:17.434838 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:16:17.437716 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:16:17.441458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:16:17.442436 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:16:17.444108 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:16:17.468460 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:16:17.468518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:16:17.470453 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:16:17.470691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:16:17.472306 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:16:17.472363 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:16:17.473830 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:16:17.473882 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:16:17.476056 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:16:17.477814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:16:17.480486 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:16:17.481083 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:16:17.481423 systemd-networkd[766]: eth0: DHCPv6 lease lost
Apr 13 20:16:17.481969 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:16:17.484690 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:16:17.484816 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:16:17.488330 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:16:17.488469 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:16:17.493644 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:16:17.493712 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:16:17.495813 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:16:17.495877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:16:17.504908 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:16:17.505781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:16:17.505840 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:16:17.507586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:16:17.507639 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:16:17.509130 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:16:17.509183 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:16:17.510782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:16:17.510833 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:16:17.512649 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:16:17.534501 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:16:17.534709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:16:17.536727 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:16:17.536906 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:16:17.538822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:16:17.538897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:16:17.540622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:16:17.540667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:16:17.542461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:16:17.542516 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:16:17.545313 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:16:17.545366 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:16:17.547421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:16:17.547473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:17.554398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:16:17.555727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:16:17.555785 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:16:17.558750 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:16:17.558815 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:16:17.561492 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:16:17.561552 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:16:17.562788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:16:17.562845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:17.564267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:16:17.564396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:16:17.566206 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:16:17.575431 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:16:17.583116 systemd[1]: Switching root.
Apr 13 20:16:17.617413 systemd-journald[178]: Journal stopped
Apr 13 20:16:10.006368 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:16:10.006392 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.006400 kernel: BIOS-provided physical RAM map:
Apr 13 20:16:10.006407 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 13 20:16:10.006412 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 13 20:16:10.006422 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:16:10.006428 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 13 20:16:10.006434 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 13 20:16:10.006440 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:16:10.006446 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:16:10.006451 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:16:10.006457 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:16:10.006463 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 13 20:16:10.006472 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:16:10.006479 kernel: NX (Execute Disable) protection: active
Apr 13 20:16:10.006485 kernel: APIC: Static calls initialized
Apr 13 20:16:10.006492 kernel: SMBIOS 2.8 present.
Apr 13 20:16:10.006498 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 13 20:16:10.006504 kernel: Hypervisor detected: KVM
Apr 13 20:16:10.006513 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:16:10.006520 kernel: kvm-clock: using sched offset of 5771338737 cycles
Apr 13 20:16:10.006526 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:16:10.006533 kernel: tsc: Detected 1999.999 MHz processor
Apr 13 20:16:10.006540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:16:10.006546 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:16:10.006553 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 13 20:16:10.006560 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:16:10.006566 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:16:10.006576 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 13 20:16:10.006582 kernel: Using GB pages for direct mapping
Apr 13 20:16:10.006589 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:16:10.006595 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 13 20:16:10.006601 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006608 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006614 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006621 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 13 20:16:10.006627 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006636 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006643 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006649 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:16:10.006659 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 13 20:16:10.006666 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 13 20:16:10.006673 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 13 20:16:10.006682 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 13 20:16:10.006689 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 13 20:16:10.006696 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 13 20:16:10.006702 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 13 20:16:10.006709 kernel: No NUMA configuration found
Apr 13 20:16:10.006716 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 13 20:16:10.006722 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 13 20:16:10.006729 kernel: Zone ranges:
Apr 13 20:16:10.006739 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:16:10.006746 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:16:10.006753 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:16:10.006760 kernel: Movable zone start for each node
Apr 13 20:16:10.006767 kernel: Early memory node ranges
Apr 13 20:16:10.006773 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:16:10.006780 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 13 20:16:10.006787 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:16:10.006794 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 13 20:16:10.006801 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:16:10.006811 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:16:10.006818 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 13 20:16:10.006824 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:16:10.006831 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:16:10.006838 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:16:10.006845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:16:10.006852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:16:10.006859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:16:10.006866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:16:10.006875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:16:10.006882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:16:10.006889 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:16:10.006896 kernel: TSC deadline timer available
Apr 13 20:16:10.006903 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:16:10.006910 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:16:10.006917 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 20:16:10.006923 kernel: kvm-guest: setup PV sched yield
Apr 13 20:16:10.006930 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:16:10.007131 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:16:10.007138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:16:10.007145 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:16:10.007152 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:16:10.007160 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:16:10.007166 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:16:10.007173 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:16:10.007180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:16:10.007188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.007198 kernel: random: crng init done
Apr 13 20:16:10.007205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:16:10.007212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:16:10.007219 kernel: Fallback order for Node 0: 0
Apr 13 20:16:10.007226 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 13 20:16:10.009417 kernel: Policy zone: Normal
Apr 13 20:16:10.009425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:16:10.009432 kernel: software IO TLB: area num 2.
Apr 13 20:16:10.009444 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227300K reserved, 0K cma-reserved)
Apr 13 20:16:10.009451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:16:10.009457 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:16:10.009464 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:16:10.009471 kernel: Dynamic Preempt: voluntary
Apr 13 20:16:10.009478 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:16:10.009485 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:16:10.009492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:16:10.009499 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:16:10.009508 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:16:10.009515 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:16:10.009522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:16:10.009529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:16:10.009535 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:16:10.009542 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:16:10.009548 kernel: Console: colour VGA+ 80x25
Apr 13 20:16:10.009555 kernel: printk: console [tty0] enabled
Apr 13 20:16:10.009562 kernel: printk: console [ttyS0] enabled
Apr 13 20:16:10.009571 kernel: ACPI: Core revision 20230628
Apr 13 20:16:10.009578 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:16:10.009584 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:16:10.009591 kernel: x2apic enabled
Apr 13 20:16:10.009607 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:16:10.009616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 20:16:10.009623 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 20:16:10.009630 kernel: kvm-guest: setup PV IPIs
Apr 13 20:16:10.009637 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:16:10.009644 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:16:10.009651 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Apr 13 20:16:10.009658 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:16:10.009667 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:16:10.009674 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:16:10.009681 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:16:10.009688 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:16:10.009695 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:16:10.009705 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:16:10.009712 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:16:10.009719 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:16:10.009726 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 13 20:16:10.009734 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 13 20:16:10.009741 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:16:10.009748 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 13 20:16:10.009755 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:16:10.009764 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:16:10.009772 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:16:10.009778 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:16:10.009786 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:16:10.009793 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:16:10.009800 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:16:10.009807 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 13 20:16:10.009814 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 13 20:16:10.009821 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:16:10.009831 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:16:10.009838 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:16:10.009845 kernel: landlock: Up and running.
Apr 13 20:16:10.009852 kernel: SELinux: Initializing.
Apr 13 20:16:10.009859 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.009866 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.009874 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 13 20:16:10.009881 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009888 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009898 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:16:10.009905 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:16:10.009912 kernel: ... version: 0
Apr 13 20:16:10.009920 kernel: ... bit width: 48
Apr 13 20:16:10.009927 kernel: ... generic registers: 6
Apr 13 20:16:10.009933 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:16:10.009941 kernel: ... max period: 00007fffffffffff
Apr 13 20:16:10.009948 kernel: ... fixed-purpose events: 0
Apr 13 20:16:10.009954 kernel: ... event mask: 000000000000003f
Apr 13 20:16:10.009964 kernel: signal: max sigframe size: 3376
Apr 13 20:16:10.009971 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:16:10.009978 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:16:10.009985 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:16:10.009992 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:16:10.009999 kernel: .... node #0, CPUs: #1
Apr 13 20:16:10.010006 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:16:10.010013 kernel: smpboot: Max logical packages: 1
Apr 13 20:16:10.010020 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 13 20:16:10.010030 kernel: devtmpfs: initialized
Apr 13 20:16:10.010037 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:16:10.010044 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:16:10.010051 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:16:10.010058 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:16:10.010065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:16:10.010072 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:16:10.010079 kernel: audit: type=2000 audit(1776111369.082:1): state=initialized audit_enabled=0 res=1
Apr 13 20:16:10.010086 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:16:10.010096 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:16:10.010103 kernel: cpuidle: using governor menu
Apr 13 20:16:10.010110 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:16:10.010117 kernel: dca service started, version 1.12.1
Apr 13 20:16:10.010124 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:16:10.010131 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:16:10.010138 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:16:10.010145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:16:10.010153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:16:10.010162 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:16:10.010169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:16:10.010176 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:16:10.010183 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:16:10.010190 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:16:10.010197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:16:10.010204 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:16:10.010211 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:16:10.010249 kernel: ACPI: Interpreter enabled
Apr 13 20:16:10.010257 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:16:10.010264 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:16:10.010271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:16:10.010278 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:16:10.010285 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:16:10.010285 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:16:10.010473 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:16:10.010614 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:16:10.010749 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:16:10.010759 kernel: PCI host bridge to bus 0000:00
Apr 13 20:16:10.010888 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:16:10.011203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:16:10.013362 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:16:10.013486 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:16:10.013602 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:16:10.013724 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 13 20:16:10.013838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:16:10.014168 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:16:10.014337 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 20:16:10.014465 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 20:16:10.014623 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 20:16:10.014756 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 20:16:10.014879 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:16:10.015019 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 13 20:16:10.018903 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:16:10.019056 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 20:16:10.019187 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 20:16:10.019364 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:16:10.019499 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:16:10.019622 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 20:16:10.019744 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 20:16:10.019868 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 20:16:10.020000 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:16:10.020125 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:16:10.020314 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:16:10.020450 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 13 20:16:10.020572 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 13 20:16:10.020703 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:16:10.020826 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:16:10.020835 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:16:10.020843 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:16:10.020850 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:16:10.020864 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:16:10.020870 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:16:10.020877 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:16:10.020884 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:16:10.020891 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:16:10.020899 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:16:10.020906 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:16:10.020913 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:16:10.020920 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:16:10.020930 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:16:10.020937 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:16:10.020943 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:16:10.020950 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:16:10.020957 kernel: iommu: Default domain type: Translated
Apr 13 20:16:10.020964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:16:10.020971 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:16:10.020979 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:16:10.020986 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 13 20:16:10.020996 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 13 20:16:10.021119 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:16:10.022625 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:16:10.022761 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:16:10.022771 kernel: vgaarb: loaded
Apr 13 20:16:10.022779 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:16:10.022786 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:16:10.022794 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:16:10.022806 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:16:10.022813 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:16:10.022820 kernel: pnp: PnP ACPI init
Apr 13 20:16:10.022958 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 20:16:10.022969 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:16:10.022976 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:16:10.022984 kernel: NET: Registered PF_INET protocol family
Apr 13 20:16:10.022991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:16:10.023002 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:16:10.023009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:16:10.023016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:16:10.023023 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:16:10.023030 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:16:10.023037 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.023044 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:16:10.023051 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:16:10.023058 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:16:10.023436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:16:10.023554 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:16:10.023668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:16:10.023780 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 13 20:16:10.023919 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 20:16:10.024221 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 13 20:16:10.025429 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:16:10.025437 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:16:10.025449 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 13 20:16:10.025456 kernel: Initialise system trusted keyrings
Apr 13 20:16:10.025464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 20:16:10.025471 kernel: Key type asymmetric registered
Apr 13 20:16:10.025478 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:16:10.025485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:16:10.025492 kernel: io scheduler mq-deadline registered
Apr 13 20:16:10.025499 kernel: io scheduler kyber registered
Apr 13 20:16:10.025506 kernel: io scheduler bfq registered
Apr 13 20:16:10.025513 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:16:10.025524 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 20:16:10.025531 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 20:16:10.025538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:16:10.025545 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:16:10.025553 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:16:10.025560 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:16:10.025567 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:16:10.025574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:16:10.025716 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 13 20:16:10.025843 kernel: rtc_cmos 00:03: registered as rtc0
Apr 13 20:16:10.025962 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:16:09 UTC (1776111369)
Apr 13 20:16:10.026079 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 13 20:16:10.026088 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 13 20:16:10.026096 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:16:10.026103 kernel: Segment Routing with IPv6
Apr 13 20:16:10.026110 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:16:10.026122 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:16:10.026129 kernel: Key type dns_resolver registered
Apr 13 20:16:10.026136 kernel: IPI shorthand broadcast: enabled
Apr 13 20:16:10.026143 kernel: sched_clock: Marking stable (905005641, 343409177)->(1384594232, -136179414)
Apr 13 20:16:10.026151 kernel: registered taskstats version 1
Apr 13 20:16:10.026158 kernel: Loading compiled-in X.509 certificates
Apr 13 20:16:10.026165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:16:10.026172 kernel: Key type .fscrypt registered
Apr 13 20:16:10.026180 kernel: Key type fscrypt-provisioning registered
Apr 13 20:16:10.026190 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:16:10.026197 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:16:10.026205 kernel: ima: No architecture policies found
Apr 13 20:16:10.026212 kernel: clk: Disabling unused clocks
Apr 13 20:16:10.026220 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:16:10.026241 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:16:10.026249 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:16:10.026256 kernel: Run /init as init process
Apr 13 20:16:10.026263 kernel: with arguments:
Apr 13 20:16:10.026274 kernel: /init
Apr 13 20:16:10.026281 kernel: with environment:
Apr 13 20:16:10.026288 kernel: HOME=/
Apr 13 20:16:10.026316 kernel: TERM=linux
Apr 13 20:16:10.026327 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:16:10.026336 systemd[1]: Detected virtualization kvm.
Apr 13 20:16:10.026344 systemd[1]: Detected architecture x86-64.
Apr 13 20:16:10.026352 systemd[1]: Running in initrd.
Apr 13 20:16:10.026363 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:16:10.026370 systemd[1]: Hostname set to .
Apr 13 20:16:10.026378 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:16:10.026386 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:16:10.026394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:16:10.026418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:16:10.026431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:16:10.026439 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:16:10.026447 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:16:10.026455 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:16:10.026465 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:16:10.026473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:16:10.026484 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:16:10.026492 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:16:10.026499 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:16:10.026507 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:16:10.026515 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:16:10.026523 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:16:10.026531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:16:10.026539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:16:10.026547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:16:10.026558 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:16:10.026566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:16:10.026574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:16:10.026582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:16:10.026590 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:16:10.026598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:16:10.026606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:16:10.026614 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:16:10.026622 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:16:10.026632 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:16:10.026640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:16:10.026670 systemd-journald[178]: Collecting audit messages is disabled.
Apr 13 20:16:10.026689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:10.026700 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:16:10.026711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:16:10.026719 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:16:10.026731 systemd-journald[178]: Journal started
Apr 13 20:16:10.026747 systemd-journald[178]: Runtime Journal (/run/log/journal/f5d5f8ede68a4eda8ba6ca4df12fd3a6) is 8.0M, max 78.3M, 70.3M free.
Apr 13 20:16:10.029363 systemd-modules-load[179]: Inserted module 'overlay'
Apr 13 20:16:10.125200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:16:10.125289 kernel: Bridge firewalling registered
Apr 13 20:16:10.057879 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 13 20:16:10.129953 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:16:10.131595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:16:10.132836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:10.141377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:10.144622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:16:10.152466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:16:10.154418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:16:10.170444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:16:10.177070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:16:10.195973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:10.197084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:16:10.198061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:16:10.204482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:16:10.214403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:16:10.216569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:16:10.229937 dracut-cmdline[209]: dracut-dracut-053
Apr 13 20:16:10.233950 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:16:10.249180 systemd-resolved[211]: Positive Trust Anchors:
Apr 13 20:16:10.249198 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:16:10.249247 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:16:10.254135 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 13 20:16:10.255434 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:16:10.259025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:16:10.320274 kernel: SCSI subsystem initialized
Apr 13 20:16:10.330253 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:16:10.342263 kernel: iscsi: registered transport (tcp)
Apr 13 20:16:10.365244 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:16:10.365304 kernel: QLogic iSCSI HBA Driver
Apr 13 20:16:10.410998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:16:10.424371 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:16:10.453436 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:16:10.453490 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:16:10.455566 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:16:10.502267 kernel: raid6: avx2x4 gen() 29063 MB/s
Apr 13 20:16:10.520261 kernel: raid6: avx2x2 gen() 26603 MB/s
Apr 13 20:16:10.538423 kernel: raid6: avx2x1 gen() 22319 MB/s
Apr 13 20:16:10.538458 kernel: raid6: using algorithm avx2x4 gen() 29063 MB/s
Apr 13 20:16:10.560770 kernel: raid6: .... xor() 4536 MB/s, rmw enabled
Apr 13 20:16:10.560819 kernel: raid6: using avx2x2 recovery algorithm
Apr 13 20:16:10.583395 kernel: xor: automatically using best checksumming function avx
Apr 13 20:16:10.719337 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:16:10.732584 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:16:10.739484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:16:10.756493 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 13 20:16:10.761309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:16:10.769444 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:16:10.786323 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 13 20:16:10.821422 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:16:10.827350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:16:10.905637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:16:10.913524 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:16:10.932425 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:16:10.938154 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:16:10.940659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:16:10.942554 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:16:10.949436 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:16:10.975148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:16:10.998333 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:16:11.001387 kernel: scsi host0: Virtio SCSI HBA
Apr 13 20:16:11.024546 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 20:16:11.025401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:16:11.205686 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:16:11.205730 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:16:11.205742 kernel: libata version 3.00 loaded.
Apr 13 20:16:11.025638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:11.190048 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:11.199684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:16:11.199856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:11.200855 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:11.241789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:11.280268 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 20:16:11.280502 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 20:16:11.283258 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 20:16:11.283477 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 20:16:11.286253 kernel: scsi host1: ahci
Apr 13 20:16:11.289253 kernel: scsi host2: ahci
Apr 13 20:16:11.291050 kernel: scsi host3: ahci
Apr 13 20:16:11.294263 kernel: scsi host4: ahci
Apr 13 20:16:11.297435 kernel: scsi host5: ahci
Apr 13 20:16:11.300983 kernel: scsi host6: ahci
Apr 13 20:16:11.301166 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 13 20:16:11.301179 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 13 20:16:11.301189 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 13 20:16:11.301198 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 13 20:16:11.301213 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 13 20:16:11.301223 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 13 20:16:11.395083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:11.406381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:16:11.424032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:16:11.615249 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.615326 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.621374 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.621446 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.625330 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.628489 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 20:16:11.638738 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 13 20:16:11.664391 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 13 20:16:11.664653 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 13 20:16:11.669697 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 13 20:16:11.669953 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 20:16:11.680322 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:16:11.680347 kernel: GPT:9289727 != 167739391
Apr 13 20:16:11.680359 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:16:11.683316 kernel: GPT:9289727 != 167739391
Apr 13 20:16:11.686442 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:16:11.686457 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:11.690531 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 13 20:16:11.726274 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (467)
Apr 13 20:16:11.733703 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 20:16:11.735837 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (463)
Apr 13 20:16:11.745363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 20:16:11.752075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 20:16:11.754360 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 20:16:11.759797 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:16:11.770393 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:16:11.776531 disk-uuid[566]: Primary Header is updated.
Apr 13 20:16:11.776531 disk-uuid[566]: Secondary Entries is updated.
Apr 13 20:16:11.776531 disk-uuid[566]: Secondary Header is updated.
Apr 13 20:16:11.782266 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:11.789264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:12.791360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:16:12.794762 disk-uuid[567]: The operation has completed successfully.
Apr 13 20:16:12.845349 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:16:12.845469 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:16:12.855487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:16:12.861393 sh[581]: Success
Apr 13 20:16:12.879254 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 20:16:12.919888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:16:12.929349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:16:12.931557 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:16:12.953621 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:16:12.953665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:12.953678 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:16:12.957526 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:16:12.961702 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:16:12.970324 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:16:12.971735 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:16:12.973048 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:16:12.979371 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:16:12.983521 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:16:13.008914 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:13.008956 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:16:13.009171 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:16:13.016856 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:16:13.016889 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:16:13.030215 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:16:13.035396 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:16:13.042990 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:16:13.051445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:16:13.122283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:16:13.130465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:16:13.141537 ignition[687]: Ignition 2.19.0
Apr 13 20:16:13.141547 ignition[687]: Stage: fetch-offline
Apr 13 20:16:13.141592 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:13.145812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:16:13.141604 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:13.141734 ignition[687]: parsed url from cmdline: ""
Apr 13 20:16:13.141742 ignition[687]: no config URL provided
Apr 13 20:16:13.141751 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.141768 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.141778 ignition[687]: failed to fetch config: resource requires networking
Apr 13 20:16:13.141981 ignition[687]: Ignition finished successfully
Apr 13 20:16:13.164393 systemd-networkd[766]: lo: Link UP
Apr 13 20:16:13.164402 systemd-networkd[766]: lo: Gained carrier
Apr 13 20:16:13.166482 systemd-networkd[766]: Enumeration completed
Apr 13 20:16:13.166573 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:16:13.167804 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:13.167809 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:16:13.169177 systemd[1]: Reached target network.target - Network.
Apr 13 20:16:13.171330 systemd-networkd[766]: eth0: Link UP
Apr 13 20:16:13.171335 systemd-networkd[766]: eth0: Gained carrier
Apr 13 20:16:13.171343 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:13.180367 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:16:13.193763 ignition[770]: Ignition 2.19.0
Apr 13 20:16:13.193776 ignition[770]: Stage: fetch
Apr 13 20:16:13.193928 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:13.193940 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:13.194020 ignition[770]: parsed url from cmdline: ""
Apr 13 20:16:13.194024 ignition[770]: no config URL provided
Apr 13 20:16:13.194030 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.194039 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:16:13.194056 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 13 20:16:13.194381 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.394552 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 13 20:16:13.394722 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.795299 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 13 20:16:13.795510 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:16:13.939316 systemd-networkd[766]: eth0: DHCPv4 address 172.234.25.54/24, gateway 172.234.25.1 acquired from 23.205.167.152
Apr 13 20:16:14.596320 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 13 20:16:14.693619 ignition[770]: PUT result: OK
Apr 13 20:16:14.693717 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 13 20:16:14.809176 ignition[770]: GET result: OK
Apr 13 20:16:14.809333 ignition[770]: parsing config with SHA512: ff32a685e3c18a097d0ecadc78a7a7d75524a59c8819cc162ada202b7727c80d59b172a462a1e78d47d2e9c6eee04fa3bad9a8f4f917e125cea541a88e4656b7
Apr 13 20:16:14.818206 unknown[770]: fetched base config from "system"
Apr 13 20:16:14.818620 ignition[770]: fetch: fetch complete
Apr 13 20:16:14.818221 unknown[770]: fetched base config from "system"
Apr 13 20:16:14.818627 ignition[770]: fetch: fetch passed
Apr 13 20:16:14.818250 unknown[770]: fetched user config from "akamai"
Apr 13 20:16:14.818673 ignition[770]: Ignition finished successfully
Apr 13 20:16:14.822404 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:16:14.830475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:16:14.847736 ignition[777]: Ignition 2.19.0
Apr 13 20:16:14.847751 ignition[777]: Stage: kargs
Apr 13 20:16:14.847939 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:14.847952 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:14.850611 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:16:14.848699 ignition[777]: kargs: kargs passed
Apr 13 20:16:14.848748 ignition[777]: Ignition finished successfully
Apr 13 20:16:14.862480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:16:14.875916 ignition[784]: Ignition 2.19.0
Apr 13 20:16:14.875933 ignition[784]: Stage: disks
Apr 13 20:16:14.876132 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:14.879949 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:16:14.876146 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:14.903216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:16:14.877260 ignition[784]: disks: disks passed Apr 13 20:16:14.904532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:16:14.877320 ignition[784]: Ignition finished successfully Apr 13 20:16:14.906457 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:16:14.908141 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:16:14.909742 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:16:14.917425 systemd-networkd[766]: eth0: Gained IPv6LL Apr 13 20:16:14.917935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:16:14.937981 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 20:16:14.942009 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:16:14.950832 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:16:15.040260 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:16:15.040656 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:16:15.042085 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:16:15.053374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:16:15.057354 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:16:15.059429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:16:15.060620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:16:15.060647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:16:15.068097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 13 20:16:15.070249 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800) Apr 13 20:16:15.076255 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:16:15.076288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:16:15.076302 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:16:15.088137 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:16:15.088168 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:16:15.092536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 20:16:15.096652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:16:15.142185 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:16:15.148505 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:16:15.155391 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:16:15.162214 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:16:15.271512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:16:15.278336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:16:15.281510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:16:15.293881 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:16:15.297742 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:16:15.324340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 13 20:16:15.329880 ignition[918]: INFO : Ignition 2.19.0 Apr 13 20:16:15.329880 ignition[918]: INFO : Stage: mount Apr 13 20:16:15.329880 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:16:15.329880 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:16:15.329880 ignition[918]: INFO : mount: mount passed Apr 13 20:16:15.329880 ignition[918]: INFO : Ignition finished successfully Apr 13 20:16:15.332997 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:16:15.339391 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:16:16.046402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:16:16.061271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Apr 13 20:16:16.068382 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:16:16.068455 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:16:16.068471 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:16:16.077904 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:16:16.077939 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:16:16.080534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 20:16:16.111007 ignition[947]: INFO : Ignition 2.19.0
Apr 13 20:16:16.111007 ignition[947]: INFO : Stage: files
Apr 13 20:16:16.113058 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:16:16.113058 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:16:16.113058 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:16:16.116196 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:16:16.116196 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:16:16.118340 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:16:16.119404 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:16:16.119404 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:16:16.119378 unknown[947]: wrote ssh authorized keys file for user: core
Apr 13 20:16:16.122401 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:16:16.122401 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:16:16.424337 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:16:16.460123 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:16:16.460123 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:16.463141 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 20:16:16.872340 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:16:17.186054 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:16:17.186054 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:16:17.188792 ignition[947]: INFO : files: files passed
Apr 13 20:16:17.188792 ignition[947]: INFO : Ignition finished successfully
Apr 13 20:16:17.190680 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:16:17.220404 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:16:17.223397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:16:17.232722 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:16:17.232857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:16:17.256461 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.256461 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.258881 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:16:17.260646 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:16:17.262118 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:16:17.268435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:16:17.293632 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:16:17.294311 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:16:17.296308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:16:17.297652 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:16:17.299377 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:16:17.305567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:16:17.321896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:16:17.328385 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:16:17.341879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:16:17.343015 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:16:17.345127 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:16:17.346996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:16:17.347130 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:16:17.348973 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:16:17.350072 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:16:17.351889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 20:16:17.353532 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:16:17.355024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:16:17.356748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:16:17.358451 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:16:17.360153 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:16:17.361799 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:16:17.363503 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:16:17.365066 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 13 20:16:17.365188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:16:17.367075 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:16:17.368185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:16:17.369721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 20:16:17.369835 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:16:17.371488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:16:17.371600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:16:17.373741 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:16:17.373854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:16:17.374893 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:16:17.374993 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:16:17.391741 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:16:17.394449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:16:17.395263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:16:17.395423 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:16:17.399357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:16:17.399459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:16:17.410078 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:16:17.410210 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 13 20:16:17.415650 ignition[1000]: INFO : Ignition 2.19.0 Apr 13 20:16:17.417251 ignition[1000]: INFO : Stage: umount Apr 13 20:16:17.417251 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:16:17.417251 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:16:17.424297 ignition[1000]: INFO : umount: umount passed Apr 13 20:16:17.424297 ignition[1000]: INFO : Ignition finished successfully Apr 13 20:16:17.422847 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:16:17.422976 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:16:17.423938 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:16:17.423993 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:16:17.428596 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:16:17.428656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:16:17.429845 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:16:17.429899 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:16:17.431609 systemd[1]: Stopped target network.target - Network. Apr 13 20:16:17.433073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:16:17.433155 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:16:17.434838 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:16:17.437716 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:16:17.441458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:16:17.442436 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:16:17.444108 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:16:17.468460 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 13 20:16:17.468518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:16:17.470453 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:16:17.470691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:16:17.472306 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:16:17.472363 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:16:17.473830 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:16:17.473882 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:16:17.476056 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:16:17.477814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:16:17.480486 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:16:17.481083 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:16:17.481423 systemd-networkd[766]: eth0: DHCPv6 lease lost Apr 13 20:16:17.481969 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:16:17.484690 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:16:17.484816 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:16:17.488330 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:16:17.488469 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:16:17.493644 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:16:17.493712 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:16:17.495813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:16:17.495877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:16:17.504908 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Apr 13 20:16:17.505781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:16:17.505840 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:16:17.507586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:16:17.507639 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:16:17.509130 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:16:17.509183 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 20:16:17.510782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:16:17.510833 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:16:17.512649 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:16:17.534501 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:16:17.534709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:16:17.536727 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:16:17.536906 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:16:17.538822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:16:17.538897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:16:17.540622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:16:17.540667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:16:17.542461 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:16:17.542516 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:16:17.545313 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:16:17.545366 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 13 20:16:17.547421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:16:17.547473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:16:17.554398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:16:17.555727 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:16:17.555785 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:16:17.558750 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 20:16:17.558815 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:16:17.561492 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:16:17.561552 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:16:17.562788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:16:17.562845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:16:17.564267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:16:17.564396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:16:17.566206 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:16:17.575431 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:16:17.583116 systemd[1]: Switching root. Apr 13 20:16:17.617413 systemd-journald[178]: Journal stopped Apr 13 20:16:18.891359 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Apr 13 20:16:18.891389 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 20:16:18.891402 kernel: SELinux: policy capability open_perms=1 Apr 13 20:16:18.891412 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 20:16:18.891425 kernel: SELinux: policy capability always_check_network=0 Apr 13 20:16:18.891434 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 20:16:18.891444 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 20:16:18.891453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 20:16:18.891462 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 20:16:18.891471 kernel: audit: type=1403 audit(1776111377.784:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 20:16:18.891481 systemd[1]: Successfully loaded SELinux policy in 59.334ms. Apr 13 20:16:18.891497 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.933ms. Apr 13 20:16:18.891508 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:16:18.891518 systemd[1]: Detected virtualization kvm. Apr 13 20:16:18.891529 systemd[1]: Detected architecture x86-64. Apr 13 20:16:18.891539 systemd[1]: Detected first boot. Apr 13 20:16:18.891553 systemd[1]: Initializing machine ID from random generator. Apr 13 20:16:18.891563 zram_generator::config[1043]: No configuration found. Apr 13 20:16:18.891574 systemd[1]: Populated /etc with preset unit settings. Apr 13 20:16:18.891584 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 20:16:18.891594 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 13 20:16:18.891604 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 20:16:18.891615 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 20:16:18.891628 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 20:16:18.891638 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 20:16:18.891648 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 20:16:18.891658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 20:16:18.891668 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 20:16:18.891678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 20:16:18.891688 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 20:16:18.891701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:16:18.891711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:16:18.891723 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 20:16:18.891733 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 20:16:18.891743 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 20:16:18.891753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:16:18.891762 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 20:16:18.891772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:16:18.891786 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 13 20:16:18.891796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 20:16:18.891810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 20:16:18.891821 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 20:16:18.891831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:16:18.891841 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:16:18.891851 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:16:18.891862 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:16:18.891875 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 20:16:18.891885 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 20:16:18.891895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:16:18.891906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:16:18.891916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:16:18.891930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 20:16:18.891940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 20:16:18.891951 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 20:16:18.891961 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 20:16:18.891971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:16:18.891982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 20:16:18.891992 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 20:16:18.892002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 13 20:16:18.892016 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 20:16:18.892026 systemd[1]: Reached target machines.target - Containers. Apr 13 20:16:18.892036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 20:16:18.892047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:16:18.892057 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:16:18.892067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 20:16:18.892077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:16:18.892088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:16:18.892101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:16:18.892111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 20:16:18.892121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:16:18.892132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 20:16:18.892142 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 20:16:18.892152 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 20:16:18.892162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 20:16:18.892173 kernel: fuse: init (API version 7.39) Apr 13 20:16:18.892187 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 20:16:18.892197 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 13 20:16:18.892208 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:16:18.892218 kernel: loop: module loaded
Apr 13 20:16:18.892228 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:16:18.892267 kernel: ACPI: bus type drm_connector registered
Apr 13 20:16:18.892278 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:16:18.892288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:16:18.892319 systemd-journald[1126]: Collecting audit messages is disabled.
Apr 13 20:16:18.892344 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:16:18.892355 systemd[1]: Stopped verity-setup.service.
Apr 13 20:16:18.892366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:16:18.892376 systemd-journald[1126]: Journal started
Apr 13 20:16:18.892399 systemd-journald[1126]: Runtime Journal (/run/log/journal/c0e9d9eb3a454878a44cd269f4cf5a65) is 8.0M, max 78.3M, 70.3M free.
Apr 13 20:16:18.477364 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:16:18.498223 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:16:18.498847 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:16:18.903247 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:16:18.905766 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:16:18.906649 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:16:18.908458 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:16:18.910438 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:16:18.911312 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:16:18.912977 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:16:18.914009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:16:18.916747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:16:18.918687 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:16:18.918988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:16:18.920251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:16:18.920542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:16:18.922036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:16:18.922344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:16:18.923493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:16:18.923685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:16:18.924934 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:16:18.925300 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:16:18.926451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:16:18.926684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:16:18.928211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:16:18.929394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:16:18.930583 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:16:18.947649 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:16:18.957296 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:16:18.988319 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:16:18.991310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:16:18.991404 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:16:18.993267 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:16:19.002492 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:16:19.005405 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:16:19.006427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:16:19.011427 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:16:19.019674 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:16:19.020873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:16:19.024336 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:16:19.025215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:16:19.032351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:16:19.046434 systemd-journald[1126]: Time spent on flushing to /var/log/journal/c0e9d9eb3a454878a44cd269f4cf5a65 is 77.439ms for 972 entries.
Apr 13 20:16:19.046434 systemd-journald[1126]: System Journal (/var/log/journal/c0e9d9eb3a454878a44cd269f4cf5a65) is 8.0M, max 195.6M, 187.6M free.
Apr 13 20:16:19.162285 systemd-journald[1126]: Received client request to flush runtime journal.
Apr 13 20:16:19.162349 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 20:16:19.162385 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:16:19.048379 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:16:19.054806 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:16:19.064617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:16:19.066786 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:16:19.069170 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:16:19.072367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:16:19.086365 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:16:19.088524 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:16:19.089617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:16:19.097646 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:16:19.134835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:16:19.142465 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 20:16:19.160533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:16:19.167929 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:16:19.171642 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:16:19.191027 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Apr 13 20:16:19.196357 kernel: loop1: detected capacity change from 0 to 228704
Apr 13 20:16:19.191715 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Apr 13 20:16:19.209772 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:16:19.217429 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:16:19.260446 kernel: loop2: detected capacity change from 0 to 8
Apr 13 20:16:19.274119 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:16:19.285363 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:16:19.286867 kernel: loop3: detected capacity change from 0 to 140768
Apr 13 20:16:19.304190 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 13 20:16:19.304543 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 13 20:16:19.321420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:16:19.338284 kernel: loop4: detected capacity change from 0 to 142488
Apr 13 20:16:19.363276 kernel: loop5: detected capacity change from 0 to 228704
Apr 13 20:16:19.389289 kernel: loop6: detected capacity change from 0 to 8
Apr 13 20:16:19.398276 kernel: loop7: detected capacity change from 0 to 140768
Apr 13 20:16:19.414924 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 13 20:16:19.415746 (sd-merge)[1191]: Merged extensions into '/usr'.
Apr 13 20:16:19.424063 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:16:19.424250 systemd[1]: Reloading...
Apr 13 20:16:19.511527 zram_generator::config[1217]: No configuration found.
Apr 13 20:16:19.618963 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:16:19.684550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:16:19.728336 systemd[1]: Reloading finished in 303 ms.
Apr 13 20:16:19.754656 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:16:19.756390 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:16:19.757789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:16:19.768485 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:16:19.772610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:16:19.780537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:16:19.791372 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:16:19.791396 systemd[1]: Reloading...
Apr 13 20:16:19.802886 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:16:19.803738 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:16:19.805003 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:16:19.805610 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 13 20:16:19.805697 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 13 20:16:19.810838 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:16:19.810912 systemd-tmpfiles[1262]: Skipping /boot
Apr 13 20:16:19.823735 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:16:19.823800 systemd-tmpfiles[1262]: Skipping /boot
Apr 13 20:16:19.847515 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Apr 13 20:16:19.893300 zram_generator::config[1289]: No configuration found.
Apr 13 20:16:20.095275 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 20:16:20.104242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:16:20.133337 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 13 20:16:20.148601 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:16:20.168340 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:16:20.177256 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 20:16:20.192488 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 20:16:20.192737 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 20:16:20.199359 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:16:20.206038 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:16:20.206150 systemd[1]: Reloading finished in 414 ms.
Apr 13 20:16:20.220246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1315)
Apr 13 20:16:20.226916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:16:20.230009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:16:20.283270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:16:20.292030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:16:20.297447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:16:20.309589 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:16:20.312663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:16:20.314481 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:16:20.324954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:16:20.334912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:16:20.336833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:16:20.339453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:16:20.344260 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:16:20.351647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:16:20.363551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:16:20.370126 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:16:20.384484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:16:20.385823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:16:20.388790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:16:20.391775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:16:20.392610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:16:20.394159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:16:20.394830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:16:20.396922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:16:20.397846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:16:20.415147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:16:20.424679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:16:20.424939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:16:20.432626 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:16:20.442748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:16:20.447982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:16:20.455087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:16:20.460159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:16:20.462930 augenrules[1403]: No rules
Apr 13 20:16:20.461059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:16:20.465830 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:16:20.466819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:16:20.469074 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:16:20.472185 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:16:20.475679 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:16:20.476941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:16:20.477750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:16:20.484412 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:16:20.489780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:16:20.490051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:16:20.493388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:16:20.494611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:16:20.507338 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:16:20.515909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:16:20.517160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:16:20.517622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:16:20.523248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:16:20.524409 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:16:20.533630 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:16:20.624435 systemd-networkd[1380]: lo: Link UP
Apr 13 20:16:20.624452 systemd-networkd[1380]: lo: Gained carrier
Apr 13 20:16:20.626571 systemd-networkd[1380]: Enumeration completed
Apr 13 20:16:20.627038 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:20.627053 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:16:20.628197 systemd-networkd[1380]: eth0: Link UP
Apr 13 20:16:20.628213 systemd-networkd[1380]: eth0: Gained carrier
Apr 13 20:16:20.628225 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:16:20.662103 systemd-resolved[1381]: Positive Trust Anchors:
Apr 13 20:16:20.662380 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:16:20.662464 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:16:20.667882 systemd-resolved[1381]: Defaulting to hostname 'linux'.
Apr 13 20:16:20.678389 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:16:20.679144 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:16:20.679442 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:16:20.680621 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:16:20.681634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:16:20.683068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:16:20.684270 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:16:20.687804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:16:20.688805 systemd[1]: Reached target network.target - Network.
Apr 13 20:16:20.689640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:16:20.698573 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:16:20.701722 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:16:20.704842 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:16:20.716572 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:16:20.745108 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:16:20.755629 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:16:20.756544 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:16:20.757491 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:16:20.758348 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:16:20.759163 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:16:20.759980 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:16:20.760016 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:16:20.760733 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:16:20.761692 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:16:20.762638 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:16:20.763445 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:16:20.765046 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:16:20.767792 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:16:20.774069 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:16:20.775860 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:16:20.776894 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:16:20.777748 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:16:20.778581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:16:20.778621 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:16:20.779892 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:16:20.783387 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:16:20.788887 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:16:20.791331 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:16:20.796315 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:16:20.797312 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:16:20.804340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:16:20.822828 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:16:20.830721 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:16:20.837107 jq[1445]: false
Apr 13 20:16:20.840672 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:16:20.853421 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:16:20.856194 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:16:20.857185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:16:20.873610 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:16:20.877606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:16:20.882668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:16:20.883355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:16:20.887781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:16:20.888000 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:16:20.906071 jq[1463]: true
Apr 13 20:16:20.906553 coreos-metadata[1443]: Apr 13 20:16:20.901 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:16:20.908884 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:16:20.910268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:16:20.920784 dbus-daemon[1444]: [system] SELinux support is enabled
Apr 13 20:16:20.922301 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:16:20.930358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:16:20.930396 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:16:20.935497 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:16:20.935528 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:16:20.940547 extend-filesystems[1446]: Found loop4
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found loop5
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found loop6
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found loop7
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda1
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda2
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda3
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found usr
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda4
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda6
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda7
Apr 13 20:16:20.943513 extend-filesystems[1446]: Found sda9
Apr 13 20:16:20.943513 extend-filesystems[1446]: Checking size of /dev/sda9
Apr 13 20:16:20.972883 tar[1466]: linux-amd64/LICENSE
Apr 13 20:16:20.972883 tar[1466]: linux-amd64/helm
Apr 13 20:16:20.973160 update_engine[1462]: I20260413 20:16:20.960466 1462 main.cc:92] Flatcar Update Engine starting
Apr 13 20:16:20.973160 update_engine[1462]: I20260413 20:16:20.972379 1462 update_check_scheduler.cc:74] Next update check in 9m27s
Apr 13 20:16:20.975372 jq[1471]: true
Apr 13 20:16:20.973153 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:16:20.974830 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:16:20.978384 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:16:20.998685 extend-filesystems[1446]: Resized partition /dev/sda9
Apr 13 20:16:21.010545 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:16:21.028267 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 13 20:16:21.125801 bash[1502]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:16:21.124023 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 13 20:16:21.124048 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:16:21.129144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:16:21.139381 systemd-logind[1458]: New seat seat0.
Apr 13 20:16:21.150211 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:16:21.170053 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1293)
Apr 13 20:16:21.170524 systemd[1]: Starting sshkeys.service...
Apr 13 20:16:21.233564 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:16:21.245514 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:16:21.318488 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:16:21.366379 systemd-networkd[1380]: eth0: DHCPv4 address 172.234.25.54/24, gateway 172.234.25.1 acquired from 23.205.167.152
Apr 13 20:16:21.367721 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Apr 13 20:16:21.371376 dbus-daemon[1444]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1380 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:16:21.376727 coreos-metadata[1515]: Apr 13 20:16:21.376 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:16:21.384503 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:16:21.412306 containerd[1477]: time="2026-04-13T20:16:21.411985472Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:16:21.485497 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.484748048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.493606043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.493630953Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.493646853Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500455436Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500498456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500570006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500589376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500752476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500767336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501270 containerd[1477]: time="2026-04-13T20:16:21.500780606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501551 coreos-metadata[1515]: Apr 13 20:16:21.485 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 13 20:16:21.501585 containerd[1477]: time="2026-04-13T20:16:21.500789706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501585 containerd[1477]: time="2026-04-13T20:16:21.500882396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.501585 containerd[1477]: time="2026-04-13T20:16:21.501124456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:16:21.502071 containerd[1477]: time="2026-04-13T20:16:21.501741247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:16:21.502071 containerd[1477]: time="2026-04-13T20:16:21.501760867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:16:21.502071 containerd[1477]: time="2026-04-13T20:16:21.501867787Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:16:21.502071 containerd[1477]: time="2026-04-13T20:16:21.501923577Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:16:21.504448 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:16:21.504448 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 13 20:16:21.504448 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Apr 13 20:16:22.288337 extend-filesystems[1446]: Resized filesystem in /dev/sda9
Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508376700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508429820Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..."
type=io.containerd.differ.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508447310Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508502760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508519390Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508646250Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.508909530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509018830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509032930Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509044770Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509058440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509070660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509083050Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.291593 containerd[1477]: time="2026-04-13T20:16:21.509096540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:16:21.506903 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509110350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509122480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509134900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509145250Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509165750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509180890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509196840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509208800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509220370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509294961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509310961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509368941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509383451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.293661 containerd[1477]: time="2026-04-13T20:16:21.509424681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:16:21.507283 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509437321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509449331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509461001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509474411Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509492231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509502181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.509512071Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511384442Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511408852Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511483822Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511497942Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511511352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511523962Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:16:22.294991 containerd[1477]: time="2026-04-13T20:16:21.511562212Z" level=info msg="NRI interface is disabled by configuration." 
Apr 13 20:16:22.285668 systemd-timesyncd[1422]: Contacted time server 144.202.66.214:123 (0.flatcar.pool.ntp.org). Apr 13 20:16:22.295436 containerd[1477]: time="2026-04-13T20:16:21.511572982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:16:22.285720 systemd-timesyncd[1422]: Initial clock synchronization to Mon 2026-04-13 20:16:22.285519 UTC. Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.511811442Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 
StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.511862372Z" level=info msg="Connect containerd service" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.511893022Z" level=info msg="using legacy CRI server" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.511899642Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.511967922Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:21.512864092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287232822Z" 
level=info msg="Start subscribing containerd event" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287284042Z" level=info msg="Start recovering state" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287344682Z" level=info msg="Start event monitor" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287360722Z" level=info msg="Start snapshots syncer" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287369492Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.287379792Z" level=info msg="Start streaming server" Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.289261903Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:16:22.295489 containerd[1477]: time="2026-04-13T20:16:22.289314383Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:16:22.286433 systemd-resolved[1381]: Clock change detected. Flushing caches. Apr 13 20:16:22.302307 containerd[1477]: time="2026-04-13T20:16:22.296075577Z" level=info msg="containerd successfully booted in 0.114877s" Apr 13 20:16:22.297473 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:16:22.315462 dbus-daemon[1444]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 20:16:22.315594 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 20:16:22.317527 dbus-daemon[1444]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1521 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 20:16:22.328061 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 13 20:16:22.370874 polkitd[1527]: Started polkitd version 121 Apr 13 20:16:22.378445 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 20:16:22.378515 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 20:16:22.381455 polkitd[1527]: Finished loading, compiling and executing 2 rules Apr 13 20:16:22.382168 dbus-daemon[1444]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 20:16:22.382330 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 20:16:22.384570 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 20:16:22.395627 coreos-metadata[1515]: Apr 13 20:16:22.395 INFO Fetch successful Apr 13 20:16:22.413124 systemd-hostnamed[1521]: Hostname set to <172-234-25-54> (transient) Apr 13 20:16:22.413932 systemd-resolved[1381]: System hostname changed to '172-234-25-54'. Apr 13 20:16:22.420669 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:16:22.433280 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:16:22.436068 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:16:22.444782 systemd[1]: Finished sshkeys.service. Apr 13 20:16:22.456202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:16:22.466001 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:16:22.477571 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 20:16:22.485048 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:16:22.503713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:16:22.515021 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:16:22.525376 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:16:22.533160 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Apr 13 20:16:22.534642 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:16:22.617093 tar[1466]: linux-amd64/README.md Apr 13 20:16:22.634134 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 20:16:22.685328 coreos-metadata[1443]: Apr 13 20:16:22.685 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 13 20:16:22.777450 coreos-metadata[1443]: Apr 13 20:16:22.777 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 13 20:16:22.958943 coreos-metadata[1443]: Apr 13 20:16:22.958 INFO Fetch successful Apr 13 20:16:22.959112 coreos-metadata[1443]: Apr 13 20:16:22.959 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 13 20:16:23.222517 coreos-metadata[1443]: Apr 13 20:16:23.222 INFO Fetch successful Apr 13 20:16:23.242942 systemd-networkd[1380]: eth0: Gained IPv6LL Apr 13 20:16:23.251486 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 20:16:23.258479 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 20:16:23.271628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:16:23.275270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 20:16:23.312931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 20:16:23.353559 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 20:16:23.355767 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 20:16:24.228106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:16:24.229390 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:16:24.229624 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 13 20:16:24.231502 systemd[1]: Startup finished in 1.046s (kernel) + 8.048s (initrd) + 5.732s (userspace) = 14.826s. Apr 13 20:16:24.823153 kubelet[1596]: E0413 20:16:24.823095 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:16:24.827539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:16:24.828026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:16:26.137330 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 20:16:26.138663 systemd[1]: Started sshd@0-172.234.25.54:22-50.85.169.122:51892.service - OpenSSH per-connection server daemon (50.85.169.122:51892). Apr 13 20:16:26.862442 sshd[1608]: Accepted publickey for core from 50.85.169.122 port 51892 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:26.864763 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:26.879641 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:16:26.886305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:16:26.888711 systemd-logind[1458]: New session 1 of user core. Apr 13 20:16:26.899695 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 20:16:26.906351 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:16:26.920229 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:16:27.027558 systemd[1612]: Queued start job for default target default.target. Apr 13 20:16:27.039269 systemd[1612]: Created slice app.slice - User Application Slice. 
Apr 13 20:16:27.039301 systemd[1612]: Reached target paths.target - Paths. Apr 13 20:16:27.039315 systemd[1612]: Reached target timers.target - Timers. Apr 13 20:16:27.041057 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:16:27.054616 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:16:27.054983 systemd[1612]: Reached target sockets.target - Sockets. Apr 13 20:16:27.055009 systemd[1612]: Reached target basic.target - Basic System. Apr 13 20:16:27.055051 systemd[1612]: Reached target default.target - Main User Target. Apr 13 20:16:27.055090 systemd[1612]: Startup finished in 127ms. Apr 13 20:16:27.055200 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:16:27.067997 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:16:27.591065 systemd[1]: Started sshd@1-172.234.25.54:22-50.85.169.122:51896.service - OpenSSH per-connection server daemon (50.85.169.122:51896). Apr 13 20:16:28.298758 sshd[1623]: Accepted publickey for core from 50.85.169.122 port 51896 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:28.300677 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:28.307285 systemd-logind[1458]: New session 2 of user core. Apr 13 20:16:28.318012 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:16:28.804511 sshd[1623]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:28.809189 systemd[1]: sshd@1-172.234.25.54:22-50.85.169.122:51896.service: Deactivated successfully. Apr 13 20:16:28.811697 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:16:28.813699 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:16:28.815392 systemd-logind[1458]: Removed session 2. 
Apr 13 20:16:28.931858 systemd[1]: Started sshd@2-172.234.25.54:22-50.85.169.122:51912.service - OpenSSH per-connection server daemon (50.85.169.122:51912). Apr 13 20:16:29.647195 sshd[1630]: Accepted publickey for core from 50.85.169.122 port 51912 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:29.649003 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:29.653988 systemd-logind[1458]: New session 3 of user core. Apr 13 20:16:29.660188 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:16:30.147079 sshd[1630]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:30.151746 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:16:30.152871 systemd[1]: sshd@2-172.234.25.54:22-50.85.169.122:51912.service: Deactivated successfully. Apr 13 20:16:30.154702 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:16:30.156039 systemd-logind[1458]: Removed session 3. Apr 13 20:16:30.278295 systemd[1]: Started sshd@3-172.234.25.54:22-50.85.169.122:46960.service - OpenSSH per-connection server daemon (50.85.169.122:46960). Apr 13 20:16:30.992008 sshd[1637]: Accepted publickey for core from 50.85.169.122 port 46960 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:30.994348 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:30.999644 systemd-logind[1458]: New session 4 of user core. Apr 13 20:16:31.005966 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 20:16:31.494769 sshd[1637]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:31.499813 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Apr 13 20:16:31.500960 systemd[1]: sshd@3-172.234.25.54:22-50.85.169.122:46960.service: Deactivated successfully. Apr 13 20:16:31.502752 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 13 20:16:31.503595 systemd-logind[1458]: Removed session 4. Apr 13 20:16:31.628221 systemd[1]: Started sshd@4-172.234.25.54:22-50.85.169.122:46976.service - OpenSSH per-connection server daemon (50.85.169.122:46976). Apr 13 20:16:32.352205 sshd[1644]: Accepted publickey for core from 50.85.169.122 port 46976 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:32.354101 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:32.359900 systemd-logind[1458]: New session 5 of user core. Apr 13 20:16:32.369968 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 20:16:32.750361 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:16:32.750719 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:16:32.773007 sudo[1647]: pam_unix(sudo:session): session closed for user root Apr 13 20:16:32.888523 sshd[1644]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:32.892765 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:16:32.893924 systemd[1]: sshd@4-172.234.25.54:22-50.85.169.122:46976.service: Deactivated successfully. Apr 13 20:16:32.896652 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:16:32.897703 systemd-logind[1458]: Removed session 5. Apr 13 20:16:33.013626 systemd[1]: Started sshd@5-172.234.25.54:22-50.85.169.122:46978.service - OpenSSH per-connection server daemon (50.85.169.122:46978). Apr 13 20:16:33.730116 sshd[1652]: Accepted publickey for core from 50.85.169.122 port 46978 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:33.730788 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:33.735804 systemd-logind[1458]: New session 6 of user core. Apr 13 20:16:33.741982 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 20:16:34.120080 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:16:34.120498 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:16:34.124849 sudo[1656]: pam_unix(sudo:session): session closed for user root Apr 13 20:16:34.131223 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:16:34.131597 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:16:34.151142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 20:16:34.152743 auditctl[1659]: No rules Apr 13 20:16:34.153189 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:16:34.153413 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:16:34.156655 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:16:34.185893 augenrules[1677]: No rules Apr 13 20:16:34.187519 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:16:34.188659 sudo[1655]: pam_unix(sudo:session): session closed for user root Apr 13 20:16:34.304177 sshd[1652]: pam_unix(sshd:session): session closed for user core Apr 13 20:16:34.308290 systemd[1]: sshd@5-172.234.25.54:22-50.85.169.122:46978.service: Deactivated successfully. Apr 13 20:16:34.310319 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:16:34.311044 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:16:34.312094 systemd-logind[1458]: Removed session 6. Apr 13 20:16:34.436070 systemd[1]: Started sshd@6-172.234.25.54:22-50.85.169.122:46980.service - OpenSSH per-connection server daemon (50.85.169.122:46980). Apr 13 20:16:35.030749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 13 20:16:35.038255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:16:35.155462 sshd[1685]: Accepted publickey for core from 50.85.169.122 port 46980 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:16:35.155075 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:16:35.164076 systemd-logind[1458]: New session 7 of user core. Apr 13 20:16:35.169172 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 20:16:35.221930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:16:35.226519 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:16:35.283756 kubelet[1696]: E0413 20:16:35.283610 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:16:35.289773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:16:35.290127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:16:35.542632 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:16:35.543078 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:16:35.838270 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 20:16:35.840258 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:16:36.126259 dockerd[1718]: time="2026-04-13T20:16:36.126073827Z" level=info msg="Starting up"
Apr 13 20:16:36.199475 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2566431454-merged.mount: Deactivated successfully.
Apr 13 20:16:36.208972 systemd[1]: var-lib-docker-metacopy\x2dcheck3732236765-merged.mount: Deactivated successfully.
Apr 13 20:16:36.236860 dockerd[1718]: time="2026-04-13T20:16:36.236468832Z" level=info msg="Loading containers: start."
Apr 13 20:16:36.369203 kernel: Initializing XFRM netlink socket
Apr 13 20:16:36.462800 systemd-networkd[1380]: docker0: Link UP
Apr 13 20:16:36.489723 dockerd[1718]: time="2026-04-13T20:16:36.489671008Z" level=info msg="Loading containers: done."
Apr 13 20:16:36.508315 dockerd[1718]: time="2026-04-13T20:16:36.508256388Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:16:36.508666 dockerd[1718]: time="2026-04-13T20:16:36.508392478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:16:36.508760 dockerd[1718]: time="2026-04-13T20:16:36.508726378Z" level=info msg="Daemon has completed initialization"
Apr 13 20:16:36.551483 dockerd[1718]: time="2026-04-13T20:16:36.551422589Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:16:36.553047 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:16:37.224310 containerd[1477]: time="2026-04-13T20:16:37.224267206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 20:16:37.861402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598580752.mount: Deactivated successfully.
Apr 13 20:16:38.972636 containerd[1477]: time="2026-04-13T20:16:38.972584059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:38.973649 containerd[1477]: time="2026-04-13T20:16:38.973625350Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29989425"
Apr 13 20:16:38.974048 containerd[1477]: time="2026-04-13T20:16:38.974009190Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:38.976429 containerd[1477]: time="2026-04-13T20:16:38.976393741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:38.978146 containerd[1477]: time="2026-04-13T20:16:38.977499862Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 1.753193416s"
Apr 13 20:16:38.978146 containerd[1477]: time="2026-04-13T20:16:38.977534792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\""
Apr 13 20:16:38.983718 containerd[1477]: time="2026-04-13T20:16:38.983682605Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 20:16:40.358676 containerd[1477]: time="2026-04-13T20:16:40.358613502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:40.359756 containerd[1477]: time="2026-04-13T20:16:40.359599152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021915"
Apr 13 20:16:40.360469 containerd[1477]: time="2026-04-13T20:16:40.360421933Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:40.363730 containerd[1477]: time="2026-04-13T20:16:40.363699834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:40.366663 containerd[1477]: time="2026-04-13T20:16:40.366639276Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.382920841s"
Apr 13 20:16:40.366725 containerd[1477]: time="2026-04-13T20:16:40.366666766Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\""
Apr 13 20:16:40.367107 containerd[1477]: time="2026-04-13T20:16:40.367089166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 20:16:41.648818 containerd[1477]: time="2026-04-13T20:16:41.648738006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:41.650370 containerd[1477]: time="2026-04-13T20:16:41.650297037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162759"
Apr 13 20:16:41.651192 containerd[1477]: time="2026-04-13T20:16:41.650489237Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:41.654950 containerd[1477]: time="2026-04-13T20:16:41.654908109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:41.656122 containerd[1477]: time="2026-04-13T20:16:41.656082250Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.288966324s"
Apr 13 20:16:41.656122 containerd[1477]: time="2026-04-13T20:16:41.656119060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\""
Apr 13 20:16:41.658297 containerd[1477]: time="2026-04-13T20:16:41.658234821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 20:16:42.693968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791999232.mount: Deactivated successfully.
Apr 13 20:16:43.075525 containerd[1477]: time="2026-04-13T20:16:43.075183199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:43.076123 containerd[1477]: time="2026-04-13T20:16:43.076074079Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828769"
Apr 13 20:16:43.077707 containerd[1477]: time="2026-04-13T20:16:43.077667630Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:43.078683 containerd[1477]: time="2026-04-13T20:16:43.078353121Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.42008025s"
Apr 13 20:16:43.078683 containerd[1477]: time="2026-04-13T20:16:43.078398681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\""
Apr 13 20:16:43.078991 containerd[1477]: time="2026-04-13T20:16:43.078965501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 20:16:43.079152 containerd[1477]: time="2026-04-13T20:16:43.079131701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:43.595860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657091504.mount: Deactivated successfully.
Apr 13 20:16:44.307927 containerd[1477]: time="2026-04-13T20:16:44.306895434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.307927 containerd[1477]: time="2026-04-13T20:16:44.307571825Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Apr 13 20:16:44.308483 containerd[1477]: time="2026-04-13T20:16:44.308450235Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.310860 containerd[1477]: time="2026-04-13T20:16:44.310815606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.311980 containerd[1477]: time="2026-04-13T20:16:44.311948077Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.232953176s"
Apr 13 20:16:44.312051 containerd[1477]: time="2026-04-13T20:16:44.312035397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 13 20:16:44.312852 containerd[1477]: time="2026-04-13T20:16:44.312808497Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 20:16:44.835843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887854634.mount: Deactivated successfully.
Apr 13 20:16:44.841408 containerd[1477]: time="2026-04-13T20:16:44.841366331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.842130 containerd[1477]: time="2026-04-13T20:16:44.842095682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Apr 13 20:16:44.843548 containerd[1477]: time="2026-04-13T20:16:44.842842012Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.844930 containerd[1477]: time="2026-04-13T20:16:44.844895623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:44.845821 containerd[1477]: time="2026-04-13T20:16:44.845587734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 532.726377ms"
Apr 13 20:16:44.845821 containerd[1477]: time="2026-04-13T20:16:44.845616604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 13 20:16:44.846561 containerd[1477]: time="2026-04-13T20:16:44.846123994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 20:16:45.361345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:16:45.368063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:16:45.378659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867275131.mount: Deactivated successfully.
Apr 13 20:16:45.546986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:16:45.551507 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:16:45.591822 kubelet[2008]: E0413 20:16:45.591743 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:16:45.595559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:16:45.595771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:16:46.198207 containerd[1477]: time="2026-04-13T20:16:46.198122089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:46.199256 containerd[1477]: time="2026-04-13T20:16:46.199213360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718846"
Apr 13 20:16:46.199750 containerd[1477]: time="2026-04-13T20:16:46.199711090Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:46.202453 containerd[1477]: time="2026-04-13T20:16:46.202001821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:16:46.203054 containerd[1477]: time="2026-04-13T20:16:46.203027582Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.356880758s"
Apr 13 20:16:46.203098 containerd[1477]: time="2026-04-13T20:16:46.203059142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 13 20:16:48.982922 systemd[1]: Started sshd@7-172.234.25.54:22-195.18.19.246:41012.service - OpenSSH per-connection server daemon (195.18.19.246:41012).
Apr 13 20:16:49.099684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:16:49.107055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:16:49.150023 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)...
Apr 13 20:16:49.150158 systemd[1]: Reloading...
Apr 13 20:16:49.339853 zram_generator::config[2139]: No configuration found.
Apr 13 20:16:49.516369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:16:49.602873 systemd[1]: Reloading finished in 451 ms.
Apr 13 20:16:49.658712 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:16:49.658817 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:16:49.659335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:16:49.669182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:16:49.833673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:16:49.845401 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:16:49.886312 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:16:49.886312 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:16:49.886312 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:16:49.886312 kubelet[2192]: I0413 20:16:49.886283 2192 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:16:50.088280 kubelet[2192]: I0413 20:16:50.088206 2192 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 20:16:50.088280 kubelet[2192]: I0413 20:16:50.088249 2192 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:16:50.088577 kubelet[2192]: I0413 20:16:50.088522 2192 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:16:50.120604 kubelet[2192]: E0413 20:16:50.120528 2192 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.25.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:16:50.125883 kubelet[2192]: I0413 20:16:50.125611 2192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:16:50.136756 kubelet[2192]: E0413 20:16:50.136610 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:16:50.136756 kubelet[2192]: I0413 20:16:50.136657 2192 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:16:50.140732 kubelet[2192]: I0413 20:16:50.140710 2192 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 20:16:50.141573 kubelet[2192]: I0413 20:16:50.141535 2192 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:16:50.141755 kubelet[2192]: I0413 20:16:50.141566 2192 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-25-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:16:50.141857 kubelet[2192]: I0413 20:16:50.141755 2192 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:16:50.141857 kubelet[2192]: I0413 20:16:50.141765 2192 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 20:16:50.141984 kubelet[2192]: I0413 20:16:50.141954 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:16:50.147196 kubelet[2192]: I0413 20:16:50.147179 2192 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 20:16:50.147256 kubelet[2192]: I0413 20:16:50.147203 2192 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:16:50.147256 kubelet[2192]: I0413 20:16:50.147248 2192 kubelet.go:386] "Adding apiserver pod source"
Apr 13 20:16:50.149814 kubelet[2192]: I0413 20:16:50.149529 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:16:50.154491 kubelet[2192]: E0413 20:16:50.153252 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.25.54:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-25-54&limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:16:50.154491 kubelet[2192]: E0413 20:16:50.153628 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.25.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 20:16:50.154752 kubelet[2192]: I0413 20:16:50.154726 2192 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:16:50.155457 kubelet[2192]: I0413 20:16:50.155274 2192 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:16:50.156618 kubelet[2192]: W0413 20:16:50.156047 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:16:50.161120 kubelet[2192]: I0413 20:16:50.161087 2192 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 20:16:50.161172 kubelet[2192]: I0413 20:16:50.161150 2192 server.go:1289] "Started kubelet"
Apr 13 20:16:50.161429 kubelet[2192]: I0413 20:16:50.161398 2192 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:16:50.162522 kubelet[2192]: I0413 20:16:50.162490 2192 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:16:50.163475 kubelet[2192]: I0413 20:16:50.163397 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:16:50.163780 kubelet[2192]: I0413 20:16:50.163747 2192 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:16:50.165869 kubelet[2192]: E0413 20:16:50.163879 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.25.54:6443/api/v1/namespaces/default/events\": dial tcp 172.234.25.54:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-25-54.18a603ead42bc95b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-25-54,UID:172-234-25-54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-25-54,},FirstTimestamp:2026-04-13 20:16:50.161109339 +0000 UTC m=+0.310832516,LastTimestamp:2026-04-13 20:16:50.161109339 +0000 UTC m=+0.310832516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-25-54,}"
Apr 13 20:16:50.166940 kubelet[2192]: I0413 20:16:50.166770 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:16:50.168584 kubelet[2192]: I0413 20:16:50.168379 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:16:50.171979 kubelet[2192]: E0413 20:16:50.171579 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-25-54\" not found"
Apr 13 20:16:50.171979 kubelet[2192]: I0413 20:16:50.171614 2192 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 20:16:50.173766 kubelet[2192]: I0413 20:16:50.173067 2192 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 20:16:50.173766 kubelet[2192]: I0413 20:16:50.173119 2192 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 20:16:50.173766 kubelet[2192]: E0413 20:16:50.173418 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.25.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 20:16:50.174789 kubelet[2192]: I0413 20:16:50.174770 2192 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:16:50.175576 kubelet[2192]: E0413 20:16:50.174961 2192 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:16:50.175576 kubelet[2192]: I0413 20:16:50.175016 2192 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:16:50.175576 kubelet[2192]: E0413 20:16:50.175524 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.25.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-25-54?timeout=10s\": dial tcp 172.234.25.54:6443: connect: connection refused" interval="200ms"
Apr 13 20:16:50.176792 kubelet[2192]: I0413 20:16:50.176774 2192 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:16:50.205856 kubelet[2192]: I0413 20:16:50.204781 2192 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:16:50.212706 kubelet[2192]: I0413 20:16:50.212688 2192 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:16:50.212794 kubelet[2192]: I0413 20:16:50.212780 2192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:16:50.212921 kubelet[2192]: I0413 20:16:50.212910 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:16:50.213107 kubelet[2192]: I0413 20:16:50.212735 2192 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:16:50.213151 kubelet[2192]: I0413 20:16:50.213107 2192 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 20:16:50.213151 kubelet[2192]: I0413 20:16:50.213137 2192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:16:50.213151 kubelet[2192]: I0413 20:16:50.213151 2192 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 20:16:50.213275 kubelet[2192]: E0413 20:16:50.213220 2192 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:16:50.218001 kubelet[2192]: I0413 20:16:50.217959 2192 policy_none.go:49] "None policy: Start"
Apr 13 20:16:50.218001 kubelet[2192]: I0413 20:16:50.217993 2192 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 20:16:50.218079 kubelet[2192]: I0413 20:16:50.218011 2192 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 20:16:50.219123 kubelet[2192]: E0413 20:16:50.219076 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.25.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 20:16:50.226792 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 20:16:50.238523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 20:16:50.243402 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 13 20:16:50.252141 kubelet[2192]: E0413 20:16:50.252101 2192 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:16:50.252382 kubelet[2192]: I0413 20:16:50.252332 2192 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:16:50.252382 kubelet[2192]: I0413 20:16:50.252354 2192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:16:50.253356 kubelet[2192]: I0413 20:16:50.253040 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:16:50.256308 kubelet[2192]: E0413 20:16:50.255019 2192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:16:50.256308 kubelet[2192]: E0413 20:16:50.255072 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-25-54\" not found"
Apr 13 20:16:50.328651 systemd[1]: Created slice kubepods-burstable-podf779c34bcebe1d81caf1162b5529bb9f.slice - libcontainer container kubepods-burstable-podf779c34bcebe1d81caf1162b5529bb9f.slice.
Apr 13 20:16:50.343795 kubelet[2192]: E0413 20:16:50.343763 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54"
Apr 13 20:16:50.348347 systemd[1]: Created slice kubepods-burstable-pod0d24b78a2663c44989d7ad534499d3db.slice - libcontainer container kubepods-burstable-pod0d24b78a2663c44989d7ad534499d3db.slice.
Apr 13 20:16:50.353501 kubelet[2192]: I0413 20:16:50.353465 2192 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:50.353977 kubelet[2192]: E0413 20:16:50.353944 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.25.54:6443/api/v1/nodes\": dial tcp 172.234.25.54:6443: connect: connection refused" node="172-234-25-54" Apr 13 20:16:50.357242 kubelet[2192]: E0413 20:16:50.357224 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:50.360192 systemd[1]: Created slice kubepods-burstable-pod97671f7f349367fc70e98b1b21000d41.slice - libcontainer container kubepods-burstable-pod97671f7f349367fc70e98b1b21000d41.slice. Apr 13 20:16:50.362309 kubelet[2192]: E0413 20:16:50.362286 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:50.374290 kubelet[2192]: I0413 20:16:50.374052 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-flexvolume-dir\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:50.374290 kubelet[2192]: I0413 20:16:50.374087 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97671f7f349367fc70e98b1b21000d41-kubeconfig\") pod \"kube-scheduler-172-234-25-54\" (UID: \"97671f7f349367fc70e98b1b21000d41\") " pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:50.374290 kubelet[2192]: I0413 20:16:50.374105 2192 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-ca-certs\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:50.374290 kubelet[2192]: I0413 20:16:50.374127 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-k8s-certs\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:50.374290 kubelet[2192]: I0413 20:16:50.374149 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:50.374428 kubelet[2192]: I0413 20:16:50.374165 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-ca-certs\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:50.376038 kubelet[2192]: E0413 20:16:50.376005 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.25.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-25-54?timeout=10s\": dial tcp 172.234.25.54:6443: connect: connection refused" interval="400ms" Apr 13 20:16:50.475661 kubelet[2192]: I0413 20:16:50.475344 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-k8s-certs\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:50.475661 kubelet[2192]: I0413 20:16:50.475430 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-kubeconfig\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:50.475661 kubelet[2192]: I0413 20:16:50.475477 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:50.556476 kubelet[2192]: I0413 20:16:50.556435 2192 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:50.556977 kubelet[2192]: E0413 20:16:50.556920 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.25.54:6443/api/v1/nodes\": dial tcp 172.234.25.54:6443: connect: connection refused" node="172-234-25-54" Apr 13 20:16:50.645027 kubelet[2192]: E0413 20:16:50.644990 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:50.645749 containerd[1477]: time="2026-04-13T20:16:50.645687912Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-234-25-54,Uid:f779c34bcebe1d81caf1162b5529bb9f,Namespace:kube-system,Attempt:0,}" Apr 13 20:16:50.657944 kubelet[2192]: E0413 20:16:50.657902 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:50.658794 containerd[1477]: time="2026-04-13T20:16:50.658734398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-25-54,Uid:0d24b78a2663c44989d7ad534499d3db,Namespace:kube-system,Attempt:0,}" Apr 13 20:16:50.664514 kubelet[2192]: E0413 20:16:50.664427 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:50.666749 containerd[1477]: time="2026-04-13T20:16:50.666701862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-25-54,Uid:97671f7f349367fc70e98b1b21000d41,Namespace:kube-system,Attempt:0,}" Apr 13 20:16:50.776627 kubelet[2192]: E0413 20:16:50.776571 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.25.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-25-54?timeout=10s\": dial tcp 172.234.25.54:6443: connect: connection refused" interval="800ms" Apr 13 20:16:50.959413 kubelet[2192]: I0413 20:16:50.959356 2192 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:50.959962 kubelet[2192]: E0413 20:16:50.959640 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.25.54:6443/api/v1/nodes\": dial tcp 172.234.25.54:6443: connect: connection refused" node="172-234-25-54" Apr 13 20:16:51.010929 kubelet[2192]: E0413 20:16:51.010875 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.234.25.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:16:51.040981 kubelet[2192]: E0413 20:16:51.040885 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.25.54:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-25-54&limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:16:51.136811 kubelet[2192]: E0413 20:16:51.136751 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.25.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:16:51.240386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245819169.mount: Deactivated successfully. 
Apr 13 20:16:51.246566 containerd[1477]: time="2026-04-13T20:16:51.246500352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:16:51.247677 containerd[1477]: time="2026-04-13T20:16:51.247627802Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:16:51.249149 containerd[1477]: time="2026-04-13T20:16:51.248964653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:16:51.249149 containerd[1477]: time="2026-04-13T20:16:51.249003003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 13 20:16:51.250175 containerd[1477]: time="2026-04-13T20:16:51.250146354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:16:51.250336 containerd[1477]: time="2026-04-13T20:16:51.250312504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:16:51.259909 containerd[1477]: time="2026-04-13T20:16:51.258629158Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:16:51.259909 containerd[1477]: time="2026-04-13T20:16:51.259456078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.679336ms" Apr 13 20:16:51.261624 containerd[1477]: time="2026-04-13T20:16:51.261591129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.715301ms" Apr 13 20:16:51.262469 containerd[1477]: time="2026-04-13T20:16:51.262437440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:16:51.263030 containerd[1477]: time="2026-04-13T20:16:51.262997220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.221558ms" Apr 13 20:16:51.436459 kubelet[2192]: E0413 20:16:51.431597 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.25.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.25.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:16:51.466010 containerd[1477]: time="2026-04-13T20:16:51.465938701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:16:51.466186 containerd[1477]: time="2026-04-13T20:16:51.466156392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:16:51.466916 containerd[1477]: time="2026-04-13T20:16:51.466889112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.467077 containerd[1477]: time="2026-04-13T20:16:51.467048082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.472628 containerd[1477]: time="2026-04-13T20:16:51.472287265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:16:51.472703 containerd[1477]: time="2026-04-13T20:16:51.472617345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:16:51.472703 containerd[1477]: time="2026-04-13T20:16:51.472632275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.472764 containerd[1477]: time="2026-04-13T20:16:51.472707475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.488888 containerd[1477]: time="2026-04-13T20:16:51.488790363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:16:51.489306 containerd[1477]: time="2026-04-13T20:16:51.489019143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:16:51.489438 containerd[1477]: time="2026-04-13T20:16:51.489403273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.493364 containerd[1477]: time="2026-04-13T20:16:51.493322575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:16:51.493980 systemd[1]: Started cri-containerd-d5a4f74dfa2c6e95345695a47c9ed97c6037d3f7b6ac817b38ba435eac8b1e68.scope - libcontainer container d5a4f74dfa2c6e95345695a47c9ed97c6037d3f7b6ac817b38ba435eac8b1e68. Apr 13 20:16:51.510950 systemd[1]: Started cri-containerd-8363acc0cd430b1c97ddc65f059c9acb5378aae6bcd753d8cf140d7b0a2d972c.scope - libcontainer container 8363acc0cd430b1c97ddc65f059c9acb5378aae6bcd753d8cf140d7b0a2d972c. Apr 13 20:16:51.519482 systemd[1]: Started cri-containerd-e4931f4ca09f699c2ab177b4089712c2466969c152531af24f6692290100b027.scope - libcontainer container e4931f4ca09f699c2ab177b4089712c2466969c152531af24f6692290100b027. 
Apr 13 20:16:51.575768 containerd[1477]: time="2026-04-13T20:16:51.575714396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-25-54,Uid:97671f7f349367fc70e98b1b21000d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5a4f74dfa2c6e95345695a47c9ed97c6037d3f7b6ac817b38ba435eac8b1e68\"" Apr 13 20:16:51.578287 kubelet[2192]: E0413 20:16:51.577429 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.25.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-25-54?timeout=10s\": dial tcp 172.234.25.54:6443: connect: connection refused" interval="1.6s" Apr 13 20:16:51.582398 kubelet[2192]: E0413 20:16:51.582188 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:51.589219 containerd[1477]: time="2026-04-13T20:16:51.589169653Z" level=info msg="CreateContainer within sandbox \"d5a4f74dfa2c6e95345695a47c9ed97c6037d3f7b6ac817b38ba435eac8b1e68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:16:51.594265 containerd[1477]: time="2026-04-13T20:16:51.594233446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-25-54,Uid:f779c34bcebe1d81caf1162b5529bb9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4931f4ca09f699c2ab177b4089712c2466969c152531af24f6692290100b027\"" Apr 13 20:16:51.595413 kubelet[2192]: E0413 20:16:51.595394 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:51.599175 containerd[1477]: time="2026-04-13T20:16:51.599141568Z" level=info msg="CreateContainer within sandbox \"e4931f4ca09f699c2ab177b4089712c2466969c152531af24f6692290100b027\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:16:51.602102 containerd[1477]: time="2026-04-13T20:16:51.602070929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-25-54,Uid:0d24b78a2663c44989d7ad534499d3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"8363acc0cd430b1c97ddc65f059c9acb5378aae6bcd753d8cf140d7b0a2d972c\"" Apr 13 20:16:51.602684 kubelet[2192]: E0413 20:16:51.602661 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:51.609619 containerd[1477]: time="2026-04-13T20:16:51.609579603Z" level=info msg="CreateContainer within sandbox \"8363acc0cd430b1c97ddc65f059c9acb5378aae6bcd753d8cf140d7b0a2d972c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:16:51.609801 containerd[1477]: time="2026-04-13T20:16:51.609769193Z" level=info msg="CreateContainer within sandbox \"d5a4f74dfa2c6e95345695a47c9ed97c6037d3f7b6ac817b38ba435eac8b1e68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"230e89804a40cf22fc95acbdfd0617d9515ea6294465164e10135bab396f8936\"" Apr 13 20:16:51.610629 containerd[1477]: time="2026-04-13T20:16:51.610600354Z" level=info msg="StartContainer for \"230e89804a40cf22fc95acbdfd0617d9515ea6294465164e10135bab396f8936\"" Apr 13 20:16:51.613161 containerd[1477]: time="2026-04-13T20:16:51.613139815Z" level=info msg="CreateContainer within sandbox \"e4931f4ca09f699c2ab177b4089712c2466969c152531af24f6692290100b027\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4f83c06c140efad0d3f83c33a9a310144c8338855a79a859fdb6d357e093ed1\"" Apr 13 20:16:51.613524 containerd[1477]: time="2026-04-13T20:16:51.613498815Z" level=info msg="StartContainer for \"b4f83c06c140efad0d3f83c33a9a310144c8338855a79a859fdb6d357e093ed1\"" Apr 13 20:16:51.628061 containerd[1477]: 
time="2026-04-13T20:16:51.628012312Z" level=info msg="CreateContainer within sandbox \"8363acc0cd430b1c97ddc65f059c9acb5378aae6bcd753d8cf140d7b0a2d972c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"010c66a02acccafa303b88a380d288aaec9d662c7d1d1ef4defd02eee9eef761\"" Apr 13 20:16:51.628522 containerd[1477]: time="2026-04-13T20:16:51.628503783Z" level=info msg="StartContainer for \"010c66a02acccafa303b88a380d288aaec9d662c7d1d1ef4defd02eee9eef761\"" Apr 13 20:16:51.652462 systemd[1]: Started cri-containerd-b4f83c06c140efad0d3f83c33a9a310144c8338855a79a859fdb6d357e093ed1.scope - libcontainer container b4f83c06c140efad0d3f83c33a9a310144c8338855a79a859fdb6d357e093ed1. Apr 13 20:16:51.665324 systemd[1]: Started cri-containerd-230e89804a40cf22fc95acbdfd0617d9515ea6294465164e10135bab396f8936.scope - libcontainer container 230e89804a40cf22fc95acbdfd0617d9515ea6294465164e10135bab396f8936. Apr 13 20:16:51.695948 systemd[1]: Started cri-containerd-010c66a02acccafa303b88a380d288aaec9d662c7d1d1ef4defd02eee9eef761.scope - libcontainer container 010c66a02acccafa303b88a380d288aaec9d662c7d1d1ef4defd02eee9eef761. 
Apr 13 20:16:51.725805 containerd[1477]: time="2026-04-13T20:16:51.725744741Z" level=info msg="StartContainer for \"b4f83c06c140efad0d3f83c33a9a310144c8338855a79a859fdb6d357e093ed1\" returns successfully" Apr 13 20:16:51.764866 kubelet[2192]: I0413 20:16:51.763136 2192 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:51.764866 kubelet[2192]: E0413 20:16:51.763788 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.25.54:6443/api/v1/nodes\": dial tcp 172.234.25.54:6443: connect: connection refused" node="172-234-25-54" Apr 13 20:16:51.765001 containerd[1477]: time="2026-04-13T20:16:51.763401610Z" level=info msg="StartContainer for \"230e89804a40cf22fc95acbdfd0617d9515ea6294465164e10135bab396f8936\" returns successfully" Apr 13 20:16:51.771859 containerd[1477]: time="2026-04-13T20:16:51.771506644Z" level=info msg="StartContainer for \"010c66a02acccafa303b88a380d288aaec9d662c7d1d1ef4defd02eee9eef761\" returns successfully" Apr 13 20:16:52.250681 kubelet[2192]: E0413 20:16:52.250643 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:52.251131 kubelet[2192]: E0413 20:16:52.250786 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:52.255773 kubelet[2192]: E0413 20:16:52.255743 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:52.256860 kubelet[2192]: E0413 20:16:52.255919 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:52.260694 
kubelet[2192]: E0413 20:16:52.260675 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:52.260902 kubelet[2192]: E0413 20:16:52.260883 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:52.452757 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 20:16:53.267067 kubelet[2192]: E0413 20:16:53.266666 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:53.268300 kubelet[2192]: E0413 20:16:53.267923 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:53.268300 kubelet[2192]: E0413 20:16:53.268059 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:53.268300 kubelet[2192]: E0413 20:16:53.268261 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:53.367301 kubelet[2192]: I0413 20:16:53.366899 2192 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:53.428769 kubelet[2192]: E0413 20:16:53.428695 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-25-54\" not found" node="172-234-25-54" Apr 13 20:16:53.604607 kubelet[2192]: I0413 20:16:53.604448 2192 kubelet_node_status.go:78] "Successfully registered node" node="172-234-25-54" Apr 13 
20:16:53.604607 kubelet[2192]: E0413 20:16:53.604490 2192 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-25-54\": node \"172-234-25-54\" not found" Apr 13 20:16:53.617543 kubelet[2192]: E0413 20:16:53.617509 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-25-54\" not found" Apr 13 20:16:53.718213 kubelet[2192]: E0413 20:16:53.718157 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-25-54\" not found" Apr 13 20:16:53.818859 kubelet[2192]: E0413 20:16:53.818810 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-25-54\" not found" Apr 13 20:16:53.876510 kubelet[2192]: I0413 20:16:53.876208 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:53.881679 kubelet[2192]: E0413 20:16:53.881648 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-25-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:53.881679 kubelet[2192]: I0413 20:16:53.881677 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:53.883235 kubelet[2192]: E0413 20:16:53.883204 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-25-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:53.883235 kubelet[2192]: I0413 20:16:53.883229 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:53.884655 kubelet[2192]: E0413 20:16:53.884616 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-25-54\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:54.155180 kubelet[2192]: I0413 20:16:54.155053 2192 apiserver.go:52] "Watching apiserver" Apr 13 20:16:54.173721 kubelet[2192]: I0413 20:16:54.173688 2192 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:16:55.306303 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... Apr 13 20:16:55.306324 systemd[1]: Reloading... Apr 13 20:16:55.414913 zram_generator::config[2517]: No configuration found. Apr 13 20:16:55.570995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:16:55.669029 systemd[1]: Reloading finished in 362 ms. Apr 13 20:16:55.720893 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:16:55.742528 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:16:55.742882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:16:55.749082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:16:55.925021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:16:55.931532 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:16:55.964682 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:16:55.966303 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 13 20:16:55.966303 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:16:55.966303 kubelet[2567]: I0413 20:16:55.965158 2567 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:16:55.974256 kubelet[2567]: I0413 20:16:55.974234 2567 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:16:55.974582 kubelet[2567]: I0413 20:16:55.974560 2567 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:16:55.974791 kubelet[2567]: I0413 20:16:55.974766 2567 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:16:55.976340 kubelet[2567]: I0413 20:16:55.976318 2567 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:16:55.978509 kubelet[2567]: I0413 20:16:55.978225 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:16:55.980651 kubelet[2567]: E0413 20:16:55.980628 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:16:55.980651 kubelet[2567]: I0413 20:16:55.980651 2567 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:16:55.984734 kubelet[2567]: I0413 20:16:55.984716 2567 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 20:16:55.984980 kubelet[2567]: I0413 20:16:55.984960 2567 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:16:55.985115 kubelet[2567]: I0413 20:16:55.984980 2567 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-25-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:16:55.985222 kubelet[2567]: I0413 20:16:55.985119 2567 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:16:55.985222 kubelet[2567]: I0413 20:16:55.985128 2567 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:16:55.985222 kubelet[2567]: I0413 20:16:55.985171 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:16:55.985352 kubelet[2567]: I0413 20:16:55.985338 2567 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:16:55.985374 kubelet[2567]: I0413 20:16:55.985358 2567 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:16:55.985727 kubelet[2567]: I0413 20:16:55.985710 2567 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:16:55.985778 kubelet[2567]: I0413 20:16:55.985738 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:16:55.987451 kubelet[2567]: I0413 20:16:55.987429 2567 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:16:55.987907 kubelet[2567]: I0413 20:16:55.987819 2567 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:16:55.992719 kubelet[2567]: I0413 20:16:55.992697 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:16:55.992768 kubelet[2567]: I0413 20:16:55.992740 2567 server.go:1289] "Started kubelet" Apr 13 20:16:55.996063 kubelet[2567]: I0413 20:16:55.995911 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:16:56.004443 kubelet[2567]: I0413 20:16:56.004100 2567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:16:56.005048 kubelet[2567]: I0413 20:16:56.004958 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:16:56.005243 kubelet[2567]: I0413 20:16:56.005221 2567 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 
20:16:56.009171 kubelet[2567]: I0413 20:16:56.009154 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:16:56.010515 kubelet[2567]: I0413 20:16:56.010433 2567 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:16:56.012594 kubelet[2567]: I0413 20:16:56.011355 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:16:56.012594 kubelet[2567]: E0413 20:16:56.011523 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-25-54\" not found" Apr 13 20:16:56.012594 kubelet[2567]: I0413 20:16:56.011957 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:16:56.012594 kubelet[2567]: I0413 20:16:56.012074 2567 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:16:56.024856 kubelet[2567]: I0413 20:16:56.024806 2567 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:16:56.031237 kubelet[2567]: I0413 20:16:56.028946 2567 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:16:56.034966 kubelet[2567]: I0413 20:16:56.026038 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:16:56.039649 kubelet[2567]: I0413 20:16:56.039597 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:16:56.039649 kubelet[2567]: I0413 20:16:56.039633 2567 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:16:56.039773 kubelet[2567]: I0413 20:16:56.039660 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:16:56.039773 kubelet[2567]: I0413 20:16:56.039667 2567 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:16:56.039773 kubelet[2567]: E0413 20:16:56.039719 2567 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:16:56.045263 kubelet[2567]: I0413 20:16:56.045246 2567 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:16:56.047170 kubelet[2567]: E0413 20:16:56.047148 2567 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:16:56.100255 kubelet[2567]: I0413 20:16:56.100230 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:16:56.100255 kubelet[2567]: I0413 20:16:56.100247 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:16:56.100255 kubelet[2567]: I0413 20:16:56.100264 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:16:56.100448 kubelet[2567]: I0413 20:16:56.100389 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:16:56.100448 kubelet[2567]: I0413 20:16:56.100398 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:16:56.100448 kubelet[2567]: I0413 20:16:56.100414 2567 policy_none.go:49] "None policy: Start" Apr 13 20:16:56.100448 kubelet[2567]: I0413 20:16:56.100424 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:16:56.100448 kubelet[2567]: I0413 20:16:56.100434 2567 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:16:56.100559 kubelet[2567]: I0413 20:16:56.100509 2567 state_mem.go:75] "Updated machine memory state" Apr 13 20:16:56.105625 kubelet[2567]: E0413 20:16:56.104947 2567 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:16:56.105625 
kubelet[2567]: I0413 20:16:56.105100 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:16:56.105625 kubelet[2567]: I0413 20:16:56.105110 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:16:56.105625 kubelet[2567]: I0413 20:16:56.105460 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:16:56.108299 kubelet[2567]: E0413 20:16:56.106942 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:16:56.140646 kubelet[2567]: I0413 20:16:56.140623 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:56.141778 kubelet[2567]: I0413 20:16:56.141004 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:56.143091 kubelet[2567]: I0413 20:16:56.141115 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.208548 kubelet[2567]: I0413 20:16:56.207958 2567 kubelet_node_status.go:75] "Attempting to register node" node="172-234-25-54" Apr 13 20:16:56.213160 kubelet[2567]: I0413 20:16:56.213122 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-ca-certs\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.213260 kubelet[2567]: I0413 20:16:56.213176 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-flexvolume-dir\") pod \"kube-controller-manager-172-234-25-54\" 
(UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.213260 kubelet[2567]: I0413 20:16:56.213196 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-k8s-certs\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.213260 kubelet[2567]: I0413 20:16:56.213214 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97671f7f349367fc70e98b1b21000d41-kubeconfig\") pod \"kube-scheduler-172-234-25-54\" (UID: \"97671f7f349367fc70e98b1b21000d41\") " pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:56.213260 kubelet[2567]: I0413 20:16:56.213257 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-kubeconfig\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.213370 kubelet[2567]: I0413 20:16:56.213275 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d24b78a2663c44989d7ad534499d3db-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-25-54\" (UID: \"0d24b78a2663c44989d7ad534499d3db\") " pod="kube-system/kube-controller-manager-172-234-25-54" Apr 13 20:16:56.213370 kubelet[2567]: I0413 20:16:56.213292 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-ca-certs\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:56.213370 kubelet[2567]: I0413 20:16:56.213333 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-k8s-certs\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:56.213370 kubelet[2567]: I0413 20:16:56.213350 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f779c34bcebe1d81caf1162b5529bb9f-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-25-54\" (UID: \"f779c34bcebe1d81caf1162b5529bb9f\") " pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:56.214997 kubelet[2567]: I0413 20:16:56.214978 2567 kubelet_node_status.go:124] "Node was previously registered" node="172-234-25-54" Apr 13 20:16:56.215071 kubelet[2567]: I0413 20:16:56.215035 2567 kubelet_node_status.go:78] "Successfully registered node" node="172-234-25-54" Apr 13 20:16:56.448026 kubelet[2567]: E0413 20:16:56.447945 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:56.448149 kubelet[2567]: E0413 20:16:56.448105 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:56.448557 kubelet[2567]: E0413 20:16:56.448439 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:56.997141 kubelet[2567]: I0413 20:16:56.997090 2567 apiserver.go:52] "Watching apiserver" Apr 13 20:16:57.012115 kubelet[2567]: I0413 20:16:57.012081 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:16:57.068909 kubelet[2567]: I0413 20:16:57.068843 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:57.069803 kubelet[2567]: E0413 20:16:57.069785 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:57.073234 kubelet[2567]: I0413 20:16:57.073216 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:57.081809 kubelet[2567]: E0413 20:16:57.081791 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-25-54\" already exists" pod="kube-system/kube-apiserver-172-234-25-54" Apr 13 20:16:57.082848 kubelet[2567]: E0413 20:16:57.082037 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:57.082848 kubelet[2567]: E0413 20:16:57.081902 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-25-54\" already exists" pod="kube-system/kube-scheduler-172-234-25-54" Apr 13 20:16:57.082848 kubelet[2567]: E0413 20:16:57.082326 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:57.097427 kubelet[2567]: I0413 20:16:57.097381 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-172-234-25-54" podStartSLOduration=1.097351406 podStartE2EDuration="1.097351406s" podCreationTimestamp="2026-04-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:16:57.080936568 +0000 UTC m=+1.144638604" watchObservedRunningTime="2026-04-13 20:16:57.097351406 +0000 UTC m=+1.161053452" Apr 13 20:16:57.097837 kubelet[2567]: I0413 20:16:57.097711 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-25-54" podStartSLOduration=1.097705172 podStartE2EDuration="1.097705172s" podCreationTimestamp="2026-04-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:16:57.096879228 +0000 UTC m=+1.160581274" watchObservedRunningTime="2026-04-13 20:16:57.097705172 +0000 UTC m=+1.161407218" Apr 13 20:16:57.108012 kubelet[2567]: I0413 20:16:57.107961 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-25-54" podStartSLOduration=1.107953655 podStartE2EDuration="1.107953655s" podCreationTimestamp="2026-04-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:16:57.107397396 +0000 UTC m=+1.171099442" watchObservedRunningTime="2026-04-13 20:16:57.107953655 +0000 UTC m=+1.171655701" Apr 13 20:16:58.070234 kubelet[2567]: E0413 20:16:58.070193 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:58.070997 kubelet[2567]: E0413 20:16:58.070814 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:59.620011 kubelet[2567]: E0413 20:16:59.619567 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:16:59.858368 kubelet[2567]: E0413 20:16:59.858302 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:00.654655 kubelet[2567]: I0413 20:17:00.654613 2567 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:17:00.655925 kubelet[2567]: I0413 20:17:00.655288 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:17:00.655990 containerd[1477]: time="2026-04-13T20:17:00.654978971Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:17:01.363421 systemd[1]: Created slice kubepods-besteffort-pod6e73f6d5_c1e8_489f_acb6_41a2902a2678.slice - libcontainer container kubepods-besteffort-pod6e73f6d5_c1e8_489f_acb6_41a2902a2678.slice. 
Apr 13 20:17:01.451633 kubelet[2567]: I0413 20:17:01.451564 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e73f6d5-c1e8-489f-acb6-41a2902a2678-lib-modules\") pod \"kube-proxy-kcpnr\" (UID: \"6e73f6d5-c1e8-489f-acb6-41a2902a2678\") " pod="kube-system/kube-proxy-kcpnr" Apr 13 20:17:01.451633 kubelet[2567]: I0413 20:17:01.451614 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgn87\" (UniqueName: \"kubernetes.io/projected/6e73f6d5-c1e8-489f-acb6-41a2902a2678-kube-api-access-dgn87\") pod \"kube-proxy-kcpnr\" (UID: \"6e73f6d5-c1e8-489f-acb6-41a2902a2678\") " pod="kube-system/kube-proxy-kcpnr" Apr 13 20:17:01.451814 kubelet[2567]: I0413 20:17:01.451657 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e73f6d5-c1e8-489f-acb6-41a2902a2678-kube-proxy\") pod \"kube-proxy-kcpnr\" (UID: \"6e73f6d5-c1e8-489f-acb6-41a2902a2678\") " pod="kube-system/kube-proxy-kcpnr" Apr 13 20:17:01.451814 kubelet[2567]: I0413 20:17:01.451683 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e73f6d5-c1e8-489f-acb6-41a2902a2678-xtables-lock\") pod \"kube-proxy-kcpnr\" (UID: \"6e73f6d5-c1e8-489f-acb6-41a2902a2678\") " pod="kube-system/kube-proxy-kcpnr" Apr 13 20:17:01.557084 kubelet[2567]: E0413 20:17:01.557048 2567 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:17:01.557084 kubelet[2567]: E0413 20:17:01.557074 2567 projected.go:194] Error preparing data for projected volume kube-api-access-dgn87 for pod kube-system/kube-proxy-kcpnr: configmap "kube-root-ca.crt" not found Apr 13 20:17:01.557351 kubelet[2567]: E0413 20:17:01.557124 2567 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e73f6d5-c1e8-489f-acb6-41a2902a2678-kube-api-access-dgn87 podName:6e73f6d5-c1e8-489f-acb6-41a2902a2678 nodeName:}" failed. No retries permitted until 2026-04-13 20:17:02.057107707 +0000 UTC m=+6.120809743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dgn87" (UniqueName: "kubernetes.io/projected/6e73f6d5-c1e8-489f-acb6-41a2902a2678-kube-api-access-dgn87") pod "kube-proxy-kcpnr" (UID: "6e73f6d5-c1e8-489f-acb6-41a2902a2678") : configmap "kube-root-ca.crt" not found Apr 13 20:17:01.845082 systemd[1]: Created slice kubepods-besteffort-pod656f3d7e_a915_4793_9c93_7adaf08b5883.slice - libcontainer container kubepods-besteffort-pod656f3d7e_a915_4793_9c93_7adaf08b5883.slice. Apr 13 20:17:01.856849 kubelet[2567]: I0413 20:17:01.854909 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/656f3d7e-a915-4793-9c93-7adaf08b5883-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-v9xtj\" (UID: \"656f3d7e-a915-4793-9c93-7adaf08b5883\") " pod="tigera-operator/tigera-operator-6bf85f8dd-v9xtj" Apr 13 20:17:01.856849 kubelet[2567]: I0413 20:17:01.854942 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngnt\" (UniqueName: \"kubernetes.io/projected/656f3d7e-a915-4793-9c93-7adaf08b5883-kube-api-access-5ngnt\") pod \"tigera-operator-6bf85f8dd-v9xtj\" (UID: \"656f3d7e-a915-4793-9c93-7adaf08b5883\") " pod="tigera-operator/tigera-operator-6bf85f8dd-v9xtj" Apr 13 20:17:02.150389 containerd[1477]: time="2026-04-13T20:17:02.150246762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-v9xtj,Uid:656f3d7e-a915-4793-9c93-7adaf08b5883,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:17:02.182146 containerd[1477]: time="2026-04-13T20:17:02.181986106Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:02.182775 containerd[1477]: time="2026-04-13T20:17:02.182128847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:02.182775 containerd[1477]: time="2026-04-13T20:17:02.182153678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:02.182775 containerd[1477]: time="2026-04-13T20:17:02.182327520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:02.221254 systemd[1]: Started cri-containerd-ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa.scope - libcontainer container ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa. Apr 13 20:17:02.260380 containerd[1477]: time="2026-04-13T20:17:02.260324418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-v9xtj,Uid:656f3d7e-a915-4793-9c93-7adaf08b5883,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa\"" Apr 13 20:17:02.263448 containerd[1477]: time="2026-04-13T20:17:02.263422596Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:17:02.275522 kubelet[2567]: E0413 20:17:02.275469 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:02.277554 containerd[1477]: time="2026-04-13T20:17:02.277221837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcpnr,Uid:6e73f6d5-c1e8-489f-acb6-41a2902a2678,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:02.297943 containerd[1477]: 
time="2026-04-13T20:17:02.297616370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:02.297943 containerd[1477]: time="2026-04-13T20:17:02.297708962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:02.297943 containerd[1477]: time="2026-04-13T20:17:02.297731512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:02.297943 containerd[1477]: time="2026-04-13T20:17:02.297899484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:02.319978 systemd[1]: Started cri-containerd-3e3efcc21b69d0de71ee5ca033b68b511d4ad274f0de9b5aa461439aab9f5252.scope - libcontainer container 3e3efcc21b69d0de71ee5ca033b68b511d4ad274f0de9b5aa461439aab9f5252. 
Apr 13 20:17:02.344537 containerd[1477]: time="2026-04-13T20:17:02.344484552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcpnr,Uid:6e73f6d5-c1e8-489f-acb6-41a2902a2678,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e3efcc21b69d0de71ee5ca033b68b511d4ad274f0de9b5aa461439aab9f5252\"" Apr 13 20:17:02.345146 kubelet[2567]: E0413 20:17:02.345127 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:02.349303 containerd[1477]: time="2026-04-13T20:17:02.349277231Z" level=info msg="CreateContainer within sandbox \"3e3efcc21b69d0de71ee5ca033b68b511d4ad274f0de9b5aa461439aab9f5252\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:17:02.361844 containerd[1477]: time="2026-04-13T20:17:02.361721787Z" level=info msg="CreateContainer within sandbox \"3e3efcc21b69d0de71ee5ca033b68b511d4ad274f0de9b5aa461439aab9f5252\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4839882f8eecf8eda2e967f9518e68090e575934b0404266962425a671b73e6\"" Apr 13 20:17:02.362751 containerd[1477]: time="2026-04-13T20:17:02.362700848Z" level=info msg="StartContainer for \"c4839882f8eecf8eda2e967f9518e68090e575934b0404266962425a671b73e6\"" Apr 13 20:17:02.400006 systemd[1]: Started cri-containerd-c4839882f8eecf8eda2e967f9518e68090e575934b0404266962425a671b73e6.scope - libcontainer container c4839882f8eecf8eda2e967f9518e68090e575934b0404266962425a671b73e6. Apr 13 20:17:02.434508 containerd[1477]: time="2026-04-13T20:17:02.434299057Z" level=info msg="StartContainer for \"c4839882f8eecf8eda2e967f9518e68090e575934b0404266962425a671b73e6\" returns successfully" Apr 13 20:17:02.971326 systemd[1]: run-containerd-runc-k8s.io-ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa-runc.iGOgvu.mount: Deactivated successfully. 
Apr 13 20:17:03.080012 kubelet[2567]: E0413 20:17:03.079206 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:03.444879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201184985.mount: Deactivated successfully. Apr 13 20:17:05.048050 containerd[1477]: time="2026-04-13T20:17:05.047993023Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:05.049212 containerd[1477]: time="2026-04-13T20:17:05.049023103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:17:05.049820 containerd[1477]: time="2026-04-13T20:17:05.049661701Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:05.051594 containerd[1477]: time="2026-04-13T20:17:05.051556699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:05.052417 containerd[1477]: time="2026-04-13T20:17:05.052264917Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.788686539s" Apr 13 20:17:05.052417 containerd[1477]: time="2026-04-13T20:17:05.052300838Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 
20:17:05.056189 containerd[1477]: time="2026-04-13T20:17:05.056163217Z" level=info msg="CreateContainer within sandbox \"ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:17:05.070073 containerd[1477]: time="2026-04-13T20:17:05.070023141Z" level=info msg="CreateContainer within sandbox \"ad05676f4906c94b5c5bbbee5256c71d110bb9caf0305ba90c2a395aec5975fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001\"" Apr 13 20:17:05.070708 containerd[1477]: time="2026-04-13T20:17:05.070570646Z" level=info msg="StartContainer for \"e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001\"" Apr 13 20:17:05.110421 systemd[1]: run-containerd-runc-k8s.io-e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001-runc.LMfCSf.mount: Deactivated successfully. Apr 13 20:17:05.116980 systemd[1]: Started cri-containerd-e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001.scope - libcontainer container e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001. 
Apr 13 20:17:05.144173 containerd[1477]: time="2026-04-13T20:17:05.144124865Z" level=info msg="StartContainer for \"e30513eaded641c30aa9828c3cb7bf1ce788e1227f571deeb29b7102dc667001\" returns successfully" Apr 13 20:17:06.100904 kubelet[2567]: I0413 20:17:06.100328 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kcpnr" podStartSLOduration=5.100312816 podStartE2EDuration="5.100312816s" podCreationTimestamp="2026-04-13 20:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:17:03.087797724 +0000 UTC m=+7.151499760" watchObservedRunningTime="2026-04-13 20:17:06.100312816 +0000 UTC m=+10.164014852" Apr 13 20:17:06.100904 kubelet[2567]: I0413 20:17:06.100420 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-v9xtj" podStartSLOduration=2.308969697 podStartE2EDuration="5.100401437s" podCreationTimestamp="2026-04-13 20:17:01 +0000 UTC" firstStartedPulling="2026-04-13 20:17:02.261721136 +0000 UTC m=+6.325423182" lastFinishedPulling="2026-04-13 20:17:05.053152886 +0000 UTC m=+9.116854922" observedRunningTime="2026-04-13 20:17:06.100256205 +0000 UTC m=+10.163958241" watchObservedRunningTime="2026-04-13 20:17:06.100401437 +0000 UTC m=+10.164103473" Apr 13 20:17:07.064797 kubelet[2567]: E0413 20:17:07.064718 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:07.407123 update_engine[1462]: I20260413 20:17:07.406916 1462 update_attempter.cc:509] Updating boot flags... 
Apr 13 20:17:07.510858 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2923) Apr 13 20:17:07.676041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2926) Apr 13 20:17:07.805877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2926) Apr 13 20:17:09.623333 kubelet[2567]: E0413 20:17:09.623286 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:09.865956 kubelet[2567]: E0413 20:17:09.865708 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:10.093308 kubelet[2567]: E0413 20:17:10.093264 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:10.907144 sudo[1703]: pam_unix(sudo:session): session closed for user root Apr 13 20:17:11.024182 sshd[1685]: pam_unix(sshd:session): session closed for user core Apr 13 20:17:11.028289 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:17:11.028889 systemd[1]: sshd@6-172.234.25.54:22-50.85.169.122:46980.service: Deactivated successfully. Apr 13 20:17:11.034047 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:17:11.034987 systemd[1]: session-7.scope: Consumed 5.199s CPU time, 157.4M memory peak, 0B memory swap peak. Apr 13 20:17:11.039027 systemd-logind[1458]: Removed session 7. Apr 13 20:17:13.749793 systemd[1]: Created slice kubepods-besteffort-pod0455dd14_9dcc_43d9_9d4a_9751e9707fd8.slice - libcontainer container kubepods-besteffort-pod0455dd14_9dcc_43d9_9d4a_9751e9707fd8.slice. 
Apr 13 20:17:13.829278 kubelet[2567]: I0413 20:17:13.829234 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0455dd14-9dcc-43d9-9d4a-9751e9707fd8-typha-certs\") pod \"calico-typha-8485f6458f-v7g8d\" (UID: \"0455dd14-9dcc-43d9-9d4a-9751e9707fd8\") " pod="calico-system/calico-typha-8485f6458f-v7g8d"
Apr 13 20:17:13.829278 kubelet[2567]: I0413 20:17:13.829276 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0455dd14-9dcc-43d9-9d4a-9751e9707fd8-tigera-ca-bundle\") pod \"calico-typha-8485f6458f-v7g8d\" (UID: \"0455dd14-9dcc-43d9-9d4a-9751e9707fd8\") " pod="calico-system/calico-typha-8485f6458f-v7g8d"
Apr 13 20:17:13.829278 kubelet[2567]: I0413 20:17:13.829294 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntq5\" (UniqueName: \"kubernetes.io/projected/0455dd14-9dcc-43d9-9d4a-9751e9707fd8-kube-api-access-5ntq5\") pod \"calico-typha-8485f6458f-v7g8d\" (UID: \"0455dd14-9dcc-43d9-9d4a-9751e9707fd8\") " pod="calico-system/calico-typha-8485f6458f-v7g8d"
Apr 13 20:17:13.882589 systemd[1]: Created slice kubepods-besteffort-podfcee5df0_6791_4c40_a408_be5a4300cb48.slice - libcontainer container kubepods-besteffort-podfcee5df0_6791_4c40_a408_be5a4300cb48.slice.
Apr 13 20:17:13.929995 kubelet[2567]: I0413 20:17:13.929947 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-lib-modules\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.929995 kubelet[2567]: I0413 20:17:13.929984 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-xtables-lock\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.929995 kubelet[2567]: I0413 20:17:13.930000 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-sys-fs\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930199 kubelet[2567]: I0413 20:17:13.930025 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-cni-log-dir\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930199 kubelet[2567]: I0413 20:17:13.930040 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-var-lib-calico\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930199 kubelet[2567]: I0413 20:17:13.930053 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcee5df0-6791-4c40-a408-be5a4300cb48-tigera-ca-bundle\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930199 kubelet[2567]: I0413 20:17:13.930066 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-cni-bin-dir\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930199 kubelet[2567]: I0413 20:17:13.930078 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-var-run-calico\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930321 kubelet[2567]: I0413 20:17:13.930094 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-bpffs\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930321 kubelet[2567]: I0413 20:17:13.930106 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-cni-net-dir\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930321 kubelet[2567]: I0413 20:17:13.930120 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-flexvol-driver-host\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930321 kubelet[2567]: I0413 20:17:13.930134 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-nodeproc\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930321 kubelet[2567]: I0413 20:17:13.930160 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fcee5df0-6791-4c40-a408-be5a4300cb48-policysync\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930437 kubelet[2567]: I0413 20:17:13.930174 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fcee5df0-6791-4c40-a408-be5a4300cb48-node-certs\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.930437 kubelet[2567]: I0413 20:17:13.930187 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66zhp\" (UniqueName: \"kubernetes.io/projected/fcee5df0-6791-4c40-a408-be5a4300cb48-kube-api-access-66zhp\") pod \"calico-node-t7k6n\" (UID: \"fcee5df0-6791-4c40-a408-be5a4300cb48\") " pod="calico-system/calico-node-t7k6n"
Apr 13 20:17:13.993322 kubelet[2567]: E0413 20:17:13.993267 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" podUID="71160270-58d3-403b-af73-5d23d46c4986"
Apr 13 20:17:14.031335 kubelet[2567]: I0413 20:17:14.031216 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71160270-58d3-403b-af73-5d23d46c4986-socket-dir\") pod \"csi-node-driver-xbkt6\" (UID: \"71160270-58d3-403b-af73-5d23d46c4986\") " pod="calico-system/csi-node-driver-xbkt6"
Apr 13 20:17:14.031886 kubelet[2567]: I0413 20:17:14.031316 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71160270-58d3-403b-af73-5d23d46c4986-varrun\") pod \"csi-node-driver-xbkt6\" (UID: \"71160270-58d3-403b-af73-5d23d46c4986\") " pod="calico-system/csi-node-driver-xbkt6"
Apr 13 20:17:14.031942 kubelet[2567]: I0413 20:17:14.031919 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5fm\" (UniqueName: \"kubernetes.io/projected/71160270-58d3-403b-af73-5d23d46c4986-kube-api-access-2q5fm\") pod \"csi-node-driver-xbkt6\" (UID: \"71160270-58d3-403b-af73-5d23d46c4986\") " pod="calico-system/csi-node-driver-xbkt6"
Apr 13 20:17:14.031992 kubelet[2567]: I0413 20:17:14.031970 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71160270-58d3-403b-af73-5d23d46c4986-kubelet-dir\") pod \"csi-node-driver-xbkt6\" (UID: \"71160270-58d3-403b-af73-5d23d46c4986\") " pod="calico-system/csi-node-driver-xbkt6"
Apr 13 20:17:14.032023 kubelet[2567]: I0413 20:17:14.031993 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71160270-58d3-403b-af73-5d23d46c4986-registration-dir\") pod \"csi-node-driver-xbkt6\" (UID: \"71160270-58d3-403b-af73-5d23d46c4986\") " pod="calico-system/csi-node-driver-xbkt6"
Apr 13 20:17:14.035498 kubelet[2567]: E0413 20:17:14.035472 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.035723 kubelet[2567]: W0413 20:17:14.035571 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.035723 kubelet[2567]: E0413 20:17:14.035600 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.035965 kubelet[2567]: E0413 20:17:14.035953 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.036019 kubelet[2567]: W0413 20:17:14.036008 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.036074 kubelet[2567]: E0413 20:17:14.036064 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.036722 kubelet[2567]: E0413 20:17:14.036709 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.036796 kubelet[2567]: W0413 20:17:14.036783 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.036952 kubelet[2567]: E0413 20:17:14.036890 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.038341 kubelet[2567]: E0413 20:17:14.038286 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.038341 kubelet[2567]: W0413 20:17:14.038315 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.038341 kubelet[2567]: E0413 20:17:14.038326 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.040414 kubelet[2567]: E0413 20:17:14.038876 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.040414 kubelet[2567]: W0413 20:17:14.038888 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.040414 kubelet[2567]: E0413 20:17:14.038898 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.044943 kubelet[2567]: E0413 20:17:14.044516 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.044943 kubelet[2567]: W0413 20:17:14.044724 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.044943 kubelet[2567]: E0413 20:17:14.044743 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.046154 kubelet[2567]: E0413 20:17:14.046131 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.046207 kubelet[2567]: W0413 20:17:14.046148 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.046207 kubelet[2567]: E0413 20:17:14.046176 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.047067 kubelet[2567]: E0413 20:17:14.046745 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.047067 kubelet[2567]: W0413 20:17:14.046758 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.047067 kubelet[2567]: E0413 20:17:14.046767 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.058054 kubelet[2567]: E0413 20:17:14.058025 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:14.058688 containerd[1477]: time="2026-04-13T20:17:14.058652792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8485f6458f-v7g8d,Uid:0455dd14-9dcc-43d9-9d4a-9751e9707fd8,Namespace:calico-system,Attempt:0,}"
Apr 13 20:17:14.074164 kubelet[2567]: E0413 20:17:14.073144 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.074164 kubelet[2567]: W0413 20:17:14.073167 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.074164 kubelet[2567]: E0413 20:17:14.073190 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.096926 containerd[1477]: time="2026-04-13T20:17:14.096370668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:17:14.096926 containerd[1477]: time="2026-04-13T20:17:14.096885091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:17:14.097160 containerd[1477]: time="2026-04-13T20:17:14.096918671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:14.097275 containerd[1477]: time="2026-04-13T20:17:14.097197452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:14.120986 systemd[1]: Started cri-containerd-a55016553477123492aae5d045faf554c2dcf7f5bcf888f6578263419afba2a1.scope - libcontainer container a55016553477123492aae5d045faf554c2dcf7f5bcf888f6578263419afba2a1.
Apr 13 20:17:14.133323 kubelet[2567]: E0413 20:17:14.133287 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.133323 kubelet[2567]: W0413 20:17:14.133311 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.133441 kubelet[2567]: E0413 20:17:14.133350 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.133862 kubelet[2567]: E0413 20:17:14.133626 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.133862 kubelet[2567]: W0413 20:17:14.133680 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.133862 kubelet[2567]: E0413 20:17:14.133691 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.134192 kubelet[2567]: E0413 20:17:14.134037 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.134192 kubelet[2567]: W0413 20:17:14.134045 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.134192 kubelet[2567]: E0413 20:17:14.134054 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.135248 kubelet[2567]: E0413 20:17:14.134435 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.135248 kubelet[2567]: W0413 20:17:14.134479 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.135248 kubelet[2567]: E0413 20:17:14.134490 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.135348 kubelet[2567]: E0413 20:17:14.135268 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.135348 kubelet[2567]: W0413 20:17:14.135277 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.135348 kubelet[2567]: E0413 20:17:14.135286 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.135924 kubelet[2567]: E0413 20:17:14.135900 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.135982 kubelet[2567]: W0413 20:17:14.135916 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.135982 kubelet[2567]: E0413 20:17:14.135972 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.136407 kubelet[2567]: E0413 20:17:14.136371 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.136407 kubelet[2567]: W0413 20:17:14.136399 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.136407 kubelet[2567]: E0413 20:17:14.136408 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.137121 kubelet[2567]: E0413 20:17:14.136970 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.137121 kubelet[2567]: W0413 20:17:14.136982 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.137121 kubelet[2567]: E0413 20:17:14.136991 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.137315 kubelet[2567]: E0413 20:17:14.137280 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.137315 kubelet[2567]: W0413 20:17:14.137293 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.137315 kubelet[2567]: E0413 20:17:14.137301 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.137672 kubelet[2567]: E0413 20:17:14.137601 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.137672 kubelet[2567]: W0413 20:17:14.137613 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.137672 kubelet[2567]: E0413 20:17:14.137632 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138016 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.139849 kubelet[2567]: W0413 20:17:14.138028 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138036 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138263 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.139849 kubelet[2567]: W0413 20:17:14.138271 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138280 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138578 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.139849 kubelet[2567]: W0413 20:17:14.138586 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138594 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.139849 kubelet[2567]: E0413 20:17:14.138870 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140067 kubelet[2567]: W0413 20:17:14.138880 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.138888 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.139169 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140067 kubelet[2567]: W0413 20:17:14.139178 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.139186 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.139428 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140067 kubelet[2567]: W0413 20:17:14.139436 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.139444 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140067 kubelet[2567]: E0413 20:17:14.139667 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140067 kubelet[2567]: W0413 20:17:14.139675 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140271 kubelet[2567]: E0413 20:17:14.139683 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140271 kubelet[2567]: E0413 20:17:14.139919 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140271 kubelet[2567]: W0413 20:17:14.139927 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140271 kubelet[2567]: E0413 20:17:14.139935 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140377 kubelet[2567]: E0413 20:17:14.140287 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140377 kubelet[2567]: W0413 20:17:14.140296 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140377 kubelet[2567]: E0413 20:17:14.140303 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.140778 kubelet[2567]: E0413 20:17:14.140531 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.140778 kubelet[2567]: W0413 20:17:14.140757 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.140778 kubelet[2567]: E0413 20:17:14.140765 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.141114 kubelet[2567]: E0413 20:17:14.141031 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.141114 kubelet[2567]: W0413 20:17:14.141089 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.141114 kubelet[2567]: E0413 20:17:14.141099 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.141511 kubelet[2567]: E0413 20:17:14.141362 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.141511 kubelet[2567]: W0413 20:17:14.141375 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.141511 kubelet[2567]: E0413 20:17:14.141384 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.141702 kubelet[2567]: E0413 20:17:14.141608 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.141702 kubelet[2567]: W0413 20:17:14.141619 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.141702 kubelet[2567]: E0413 20:17:14.141627 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.141920 kubelet[2567]: E0413 20:17:14.141885 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.141920 kubelet[2567]: W0413 20:17:14.141893 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.141920 kubelet[2567]: E0413 20:17:14.141900 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.142235 kubelet[2567]: E0413 20:17:14.142131 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.142235 kubelet[2567]: W0413 20:17:14.142143 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.142235 kubelet[2567]: E0413 20:17:14.142150 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:17:14.150919 kubelet[2567]: E0413 20:17:14.150891 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:17:14.150919 kubelet[2567]: W0413 20:17:14.150908 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:17:14.150919 kubelet[2567]: E0413 20:17:14.150918 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:14.179737 containerd[1477]: time="2026-04-13T20:17:14.179690867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8485f6458f-v7g8d,Uid:0455dd14-9dcc-43d9-9d4a-9751e9707fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"a55016553477123492aae5d045faf554c2dcf7f5bcf888f6578263419afba2a1\"" Apr 13 20:17:14.180849 kubelet[2567]: E0413 20:17:14.180630 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:14.182431 containerd[1477]: time="2026-04-13T20:17:14.182339853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:17:14.202268 containerd[1477]: time="2026-04-13T20:17:14.202063891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t7k6n,Uid:fcee5df0-6791-4c40-a408-be5a4300cb48,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:14.244867 containerd[1477]: time="2026-04-13T20:17:14.244342035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:14.244867 containerd[1477]: time="2026-04-13T20:17:14.244404365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:14.244867 containerd[1477]: time="2026-04-13T20:17:14.244607076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:14.244867 containerd[1477]: time="2026-04-13T20:17:14.244721667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:14.268016 systemd[1]: Started cri-containerd-39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2.scope - libcontainer container 39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2. Apr 13 20:17:14.298819 containerd[1477]: time="2026-04-13T20:17:14.298727590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t7k6n,Uid:fcee5df0-6791-4c40-a408-be5a4300cb48,Namespace:calico-system,Attempt:0,} returns sandbox id \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\"" Apr 13 20:17:15.049778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506680710.mount: Deactivated successfully. Apr 13 20:17:15.663665 containerd[1477]: time="2026-04-13T20:17:15.663420848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:15.665098 containerd[1477]: time="2026-04-13T20:17:15.664975056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:17:15.666846 containerd[1477]: time="2026-04-13T20:17:15.665680001Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:15.667755 containerd[1477]: time="2026-04-13T20:17:15.667490761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:15.668462 containerd[1477]: time="2026-04-13T20:17:15.668432586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.486061193s" Apr 13 20:17:15.668506 containerd[1477]: time="2026-04-13T20:17:15.668463726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:17:15.670727 containerd[1477]: time="2026-04-13T20:17:15.670708139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:17:15.688932 containerd[1477]: time="2026-04-13T20:17:15.688896061Z" level=info msg="CreateContainer within sandbox \"a55016553477123492aae5d045faf554c2dcf7f5bcf888f6578263419afba2a1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:17:15.696867 containerd[1477]: time="2026-04-13T20:17:15.696818536Z" level=info msg="CreateContainer within sandbox \"a55016553477123492aae5d045faf554c2dcf7f5bcf888f6578263419afba2a1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6542f7769e408e91dab6ed2f23869760030ebfcde4ccda7286136b90b1903861\"" Apr 13 20:17:15.698766 containerd[1477]: time="2026-04-13T20:17:15.697175689Z" level=info msg="StartContainer for \"6542f7769e408e91dab6ed2f23869760030ebfcde4ccda7286136b90b1903861\"" Apr 13 20:17:15.728726 systemd[1]: Started cri-containerd-6542f7769e408e91dab6ed2f23869760030ebfcde4ccda7286136b90b1903861.scope - libcontainer container 6542f7769e408e91dab6ed2f23869760030ebfcde4ccda7286136b90b1903861. 
Apr 13 20:17:15.775463 containerd[1477]: time="2026-04-13T20:17:15.775427211Z" level=info msg="StartContainer for \"6542f7769e408e91dab6ed2f23869760030ebfcde4ccda7286136b90b1903861\" returns successfully" Apr 13 20:17:16.041891 kubelet[2567]: E0413 20:17:16.041278 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" podUID="71160270-58d3-403b-af73-5d23d46c4986" Apr 13 20:17:16.117528 kubelet[2567]: E0413 20:17:16.117490 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:16.130286 kubelet[2567]: E0413 20:17:16.130258 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.130286 kubelet[2567]: W0413 20:17:16.130279 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.130286 kubelet[2567]: E0413 20:17:16.130295 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.130726 kubelet[2567]: E0413 20:17:16.130705 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.130726 kubelet[2567]: W0413 20:17:16.130720 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.130822 kubelet[2567]: E0413 20:17:16.130731 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.130996 kubelet[2567]: E0413 20:17:16.130965 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.130996 kubelet[2567]: W0413 20:17:16.130977 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.130996 kubelet[2567]: E0413 20:17:16.130986 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.131235 kubelet[2567]: E0413 20:17:16.131224 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.131235 kubelet[2567]: W0413 20:17:16.131234 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.131309 kubelet[2567]: E0413 20:17:16.131242 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.131457 kubelet[2567]: E0413 20:17:16.131442 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.131457 kubelet[2567]: W0413 20:17:16.131454 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.131528 kubelet[2567]: E0413 20:17:16.131463 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.131891 kubelet[2567]: E0413 20:17:16.131876 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.131891 kubelet[2567]: W0413 20:17:16.131888 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.131963 kubelet[2567]: E0413 20:17:16.131896 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.132105 kubelet[2567]: E0413 20:17:16.132091 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.132105 kubelet[2567]: W0413 20:17:16.132102 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.132168 kubelet[2567]: E0413 20:17:16.132111 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.132331 kubelet[2567]: E0413 20:17:16.132312 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.132331 kubelet[2567]: W0413 20:17:16.132323 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.132331 kubelet[2567]: E0413 20:17:16.132331 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.132719 kubelet[2567]: E0413 20:17:16.132709 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.132719 kubelet[2567]: W0413 20:17:16.132718 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.132772 kubelet[2567]: E0413 20:17:16.132726 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.132969 kubelet[2567]: E0413 20:17:16.132954 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.132969 kubelet[2567]: W0413 20:17:16.132966 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.133040 kubelet[2567]: E0413 20:17:16.132974 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.133192 kubelet[2567]: E0413 20:17:16.133165 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.133192 kubelet[2567]: W0413 20:17:16.133178 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.133192 kubelet[2567]: E0413 20:17:16.133187 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.133896 kubelet[2567]: E0413 20:17:16.133385 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.133896 kubelet[2567]: W0413 20:17:16.133393 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.133896 kubelet[2567]: E0413 20:17:16.133400 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.133896 kubelet[2567]: E0413 20:17:16.133642 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.133896 kubelet[2567]: W0413 20:17:16.133869 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.133896 kubelet[2567]: E0413 20:17:16.133877 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.134174 kubelet[2567]: E0413 20:17:16.134070 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.134174 kubelet[2567]: W0413 20:17:16.134077 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.134174 kubelet[2567]: E0413 20:17:16.134084 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.134296 kubelet[2567]: E0413 20:17:16.134284 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.134296 kubelet[2567]: W0413 20:17:16.134292 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.134334 kubelet[2567]: E0413 20:17:16.134299 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.140238 kubelet[2567]: I0413 20:17:16.140184 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8485f6458f-v7g8d" podStartSLOduration=1.652404633 podStartE2EDuration="3.140171326s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:14.181989611 +0000 UTC m=+18.245691647" lastFinishedPulling="2026-04-13 20:17:15.669756304 +0000 UTC m=+19.733458340" observedRunningTime="2026-04-13 20:17:16.139906114 +0000 UTC m=+20.203608150" watchObservedRunningTime="2026-04-13 20:17:16.140171326 +0000 UTC m=+20.203873372" Apr 13 20:17:16.149339 kubelet[2567]: E0413 20:17:16.149318 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.149339 kubelet[2567]: W0413 20:17:16.149333 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.149339 kubelet[2567]: E0413 20:17:16.149346 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.149584 kubelet[2567]: E0413 20:17:16.149572 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.149584 kubelet[2567]: W0413 20:17:16.149584 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.149699 kubelet[2567]: E0413 20:17:16.149593 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.149875 kubelet[2567]: E0413 20:17:16.149860 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.149914 kubelet[2567]: W0413 20:17:16.149875 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.149914 kubelet[2567]: E0413 20:17:16.149883 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.150188 kubelet[2567]: E0413 20:17:16.150153 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.150188 kubelet[2567]: W0413 20:17:16.150165 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.150188 kubelet[2567]: E0413 20:17:16.150173 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.150422 kubelet[2567]: E0413 20:17:16.150409 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.150422 kubelet[2567]: W0413 20:17:16.150419 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.150504 kubelet[2567]: E0413 20:17:16.150426 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.150879 kubelet[2567]: E0413 20:17:16.150656 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.150919 kubelet[2567]: W0413 20:17:16.150875 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.150919 kubelet[2567]: E0413 20:17:16.150904 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.151182 kubelet[2567]: E0413 20:17:16.151160 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.151182 kubelet[2567]: W0413 20:17:16.151174 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.151317 kubelet[2567]: E0413 20:17:16.151194 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.151919 kubelet[2567]: E0413 20:17:16.151899 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.151919 kubelet[2567]: W0413 20:17:16.151912 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.152043 kubelet[2567]: E0413 20:17:16.151923 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.152228 kubelet[2567]: E0413 20:17:16.152173 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.152228 kubelet[2567]: W0413 20:17:16.152185 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.152228 kubelet[2567]: E0413 20:17:16.152194 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.152438 kubelet[2567]: E0413 20:17:16.152425 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.152438 kubelet[2567]: W0413 20:17:16.152437 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.152498 kubelet[2567]: E0413 20:17:16.152445 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.152982 kubelet[2567]: E0413 20:17:16.152965 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.152982 kubelet[2567]: W0413 20:17:16.152976 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.152982 kubelet[2567]: E0413 20:17:16.152985 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.153233 kubelet[2567]: E0413 20:17:16.153221 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.153233 kubelet[2567]: W0413 20:17:16.153232 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.153307 kubelet[2567]: E0413 20:17:16.153240 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.153732 kubelet[2567]: E0413 20:17:16.153707 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.153732 kubelet[2567]: W0413 20:17:16.153719 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.153732 kubelet[2567]: E0413 20:17:16.153728 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.155006 kubelet[2567]: E0413 20:17:16.154890 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.155006 kubelet[2567]: W0413 20:17:16.154903 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.155006 kubelet[2567]: E0413 20:17:16.154914 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.156084 kubelet[2567]: E0413 20:17:16.155973 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.156084 kubelet[2567]: W0413 20:17:16.155985 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.156084 kubelet[2567]: E0413 20:17:16.155993 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.157372 kubelet[2567]: E0413 20:17:16.156787 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.157372 kubelet[2567]: W0413 20:17:16.156798 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.157372 kubelet[2567]: E0413 20:17:16.156807 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.157372 kubelet[2567]: E0413 20:17:16.157102 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.157372 kubelet[2567]: W0413 20:17:16.157112 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.157372 kubelet[2567]: E0413 20:17:16.157122 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:17:16.157566 kubelet[2567]: E0413 20:17:16.157499 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:17:16.157566 kubelet[2567]: W0413 20:17:16.157507 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:17:16.157566 kubelet[2567]: E0413 20:17:16.157515 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:17:16.567688 containerd[1477]: time="2026-04-13T20:17:16.567630073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:16.569855 containerd[1477]: time="2026-04-13T20:17:16.568711588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:17:16.569855 containerd[1477]: time="2026-04-13T20:17:16.568883579Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:16.570850 containerd[1477]: time="2026-04-13T20:17:16.570775509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:16.571862 containerd[1477]: time="2026-04-13T20:17:16.571464373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 900.659094ms" Apr 13 20:17:16.571862 containerd[1477]: time="2026-04-13T20:17:16.571503443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:17:16.576425 containerd[1477]: time="2026-04-13T20:17:16.576381689Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:17:16.589630 containerd[1477]: time="2026-04-13T20:17:16.589580129Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b\"" Apr 13 20:17:16.591909 containerd[1477]: time="2026-04-13T20:17:16.590654475Z" level=info msg="StartContainer for \"9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b\"" Apr 13 20:17:16.623820 systemd[1]: run-containerd-runc-k8s.io-9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b-runc.jcUwtr.mount: Deactivated successfully. Apr 13 20:17:16.632986 systemd[1]: Started cri-containerd-9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b.scope - libcontainer container 9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b. Apr 13 20:17:16.665162 containerd[1477]: time="2026-04-13T20:17:16.665049141Z" level=info msg="StartContainer for \"9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b\" returns successfully" Apr 13 20:17:16.687345 systemd[1]: cri-containerd-9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b.scope: Deactivated successfully. 
Apr 13 20:17:16.779799 containerd[1477]: time="2026-04-13T20:17:16.779716402Z" level=info msg="shim disconnected" id=9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b namespace=k8s.io Apr 13 20:17:16.779799 containerd[1477]: time="2026-04-13T20:17:16.779793943Z" level=warning msg="cleaning up after shim disconnected" id=9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b namespace=k8s.io Apr 13 20:17:16.779799 containerd[1477]: time="2026-04-13T20:17:16.779803683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:16.937121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b9d6e991bec965acbdf54c96c0acb55384a0aeefc77b0318e7925bc3b48375b-rootfs.mount: Deactivated successfully. Apr 13 20:17:17.122141 kubelet[2567]: I0413 20:17:17.120348 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:17.122141 kubelet[2567]: E0413 20:17:17.120642 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:17.122604 containerd[1477]: time="2026-04-13T20:17:17.121190445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:17:18.041171 kubelet[2567]: E0413 20:17:18.040555 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" podUID="71160270-58d3-403b-af73-5d23d46c4986" Apr 13 20:17:20.043797 kubelet[2567]: E0413 20:17:20.043372 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" 
podUID="71160270-58d3-403b-af73-5d23d46c4986" Apr 13 20:17:21.108939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230251719.mount: Deactivated successfully. Apr 13 20:17:21.137173 containerd[1477]: time="2026-04-13T20:17:21.137113575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:21.137972 containerd[1477]: time="2026-04-13T20:17:21.137857167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:17:21.138853 containerd[1477]: time="2026-04-13T20:17:21.138527500Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:21.140126 containerd[1477]: time="2026-04-13T20:17:21.140092086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:21.141376 containerd[1477]: time="2026-04-13T20:17:21.140929100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.019707915s" Apr 13 20:17:21.141376 containerd[1477]: time="2026-04-13T20:17:21.140961440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:17:21.145117 containerd[1477]: time="2026-04-13T20:17:21.145089067Z" level=info msg="CreateContainer within sandbox 
\"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:17:21.157863 containerd[1477]: time="2026-04-13T20:17:21.157781617Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc\"" Apr 13 20:17:21.159590 containerd[1477]: time="2026-04-13T20:17:21.158629710Z" level=info msg="StartContainer for \"3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc\"" Apr 13 20:17:21.196952 systemd[1]: Started cri-containerd-3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc.scope - libcontainer container 3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc. Apr 13 20:17:21.225178 containerd[1477]: time="2026-04-13T20:17:21.225143136Z" level=info msg="StartContainer for \"3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc\" returns successfully" Apr 13 20:17:21.265456 systemd[1]: cri-containerd-3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc.scope: Deactivated successfully. 
Apr 13 20:17:21.363280 containerd[1477]: time="2026-04-13T20:17:21.362623215Z" level=info msg="shim disconnected" id=3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc namespace=k8s.io Apr 13 20:17:21.363280 containerd[1477]: time="2026-04-13T20:17:21.362665685Z" level=warning msg="cleaning up after shim disconnected" id=3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc namespace=k8s.io Apr 13 20:17:21.363280 containerd[1477]: time="2026-04-13T20:17:21.362674615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:22.041005 kubelet[2567]: E0413 20:17:22.040508 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" podUID="71160270-58d3-403b-af73-5d23d46c4986" Apr 13 20:17:22.109535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3973539a66a3df10013e1cefef42b6a04e5f122268fb778ab11bf452df393abc-rootfs.mount: Deactivated successfully. 
Apr 13 20:17:22.137305 containerd[1477]: time="2026-04-13T20:17:22.135723705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:17:23.733950 kubelet[2567]: I0413 20:17:23.733021 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:23.733950 kubelet[2567]: E0413 20:17:23.733379 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:24.041337 kubelet[2567]: E0413 20:17:24.041287 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xbkt6" podUID="71160270-58d3-403b-af73-5d23d46c4986" Apr 13 20:17:24.137255 kubelet[2567]: E0413 20:17:24.137229 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:24.335472 containerd[1477]: time="2026-04-13T20:17:24.335345595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:24.337025 containerd[1477]: time="2026-04-13T20:17:24.336931470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:17:24.341489 containerd[1477]: time="2026-04-13T20:17:24.337311342Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:24.344986 containerd[1477]: time="2026-04-13T20:17:24.344958327Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:24.346119 containerd[1477]: time="2026-04-13T20:17:24.346088081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.208209498s" Apr 13 20:17:24.346225 containerd[1477]: time="2026-04-13T20:17:24.346206681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:17:24.350610 containerd[1477]: time="2026-04-13T20:17:24.350584256Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:17:24.362172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436320685.mount: Deactivated successfully. Apr 13 20:17:24.364814 containerd[1477]: time="2026-04-13T20:17:24.364777354Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97\"" Apr 13 20:17:24.366664 containerd[1477]: time="2026-04-13T20:17:24.365524877Z" level=info msg="StartContainer for \"bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97\"" Apr 13 20:17:24.401054 systemd[1]: Started cri-containerd-bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97.scope - libcontainer container bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97. 
Apr 13 20:17:24.428194 containerd[1477]: time="2026-04-13T20:17:24.428157338Z" level=info msg="StartContainer for \"bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97\" returns successfully" Apr 13 20:17:24.957667 containerd[1477]: time="2026-04-13T20:17:24.957628508Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:17:24.960381 systemd[1]: cri-containerd-bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97.scope: Deactivated successfully. Apr 13 20:17:24.983223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97-rootfs.mount: Deactivated successfully. Apr 13 20:17:24.986434 containerd[1477]: time="2026-04-13T20:17:24.986383255Z" level=info msg="shim disconnected" id=bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97 namespace=k8s.io Apr 13 20:17:24.986434 containerd[1477]: time="2026-04-13T20:17:24.986430455Z" level=warning msg="cleaning up after shim disconnected" id=bb9e9b2168aa8205785fb371966ab351fc8450e32c5891012f524962842b5b97 namespace=k8s.io Apr 13 20:17:24.986651 containerd[1477]: time="2026-04-13T20:17:24.986441595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:25.001663 kubelet[2567]: I0413 20:17:25.001564 2567 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 20:17:25.039861 systemd[1]: Created slice kubepods-burstable-pod06855e45_eda3_44b0_974d_ece42c1ccc17.slice - libcontainer container kubepods-burstable-pod06855e45_eda3_44b0_974d_ece42c1ccc17.slice. Apr 13 20:17:25.052633 systemd[1]: Created slice kubepods-burstable-pod22c0e952_c353_4318_b086_e79219dca900.slice - libcontainer container kubepods-burstable-pod22c0e952_c353_4318_b086_e79219dca900.slice. 
Apr 13 20:17:25.060390 systemd[1]: Created slice kubepods-besteffort-pod2e87e8c8_8e00_4dcc_840e_6674326e8d34.slice - libcontainer container kubepods-besteffort-pod2e87e8c8_8e00_4dcc_840e_6674326e8d34.slice. Apr 13 20:17:25.070193 systemd[1]: Created slice kubepods-besteffort-pod5d376255_a77b_4d63_b495_936717629436.slice - libcontainer container kubepods-besteffort-pod5d376255_a77b_4d63_b495_936717629436.slice. Apr 13 20:17:25.077877 systemd[1]: Created slice kubepods-besteffort-pod1254aaf0_2b67_4da3_beb0_35175e6f0de5.slice - libcontainer container kubepods-besteffort-pod1254aaf0_2b67_4da3_beb0_35175e6f0de5.slice. Apr 13 20:17:25.085705 systemd[1]: Created slice kubepods-besteffort-pod81c6c9d7_1fb2_40ea_825b_f4755d87f2cb.slice - libcontainer container kubepods-besteffort-pod81c6c9d7_1fb2_40ea_825b_f4755d87f2cb.slice. Apr 13 20:17:25.092252 systemd[1]: Created slice kubepods-besteffort-poda1b73c5c_4269_4782_9519_ef2714c0260c.slice - libcontainer container kubepods-besteffort-poda1b73c5c_4269_4782_9519_ef2714c0260c.slice. 
Apr 13 20:17:25.117669 kubelet[2567]: I0413 20:17:25.117213 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5d376255-a77b-4d63-b495-936717629436-goldmane-key-pair\") pod \"goldmane-5b85766d88-q4k2r\" (UID: \"5d376255-a77b-4d63-b495-936717629436\") " pod="calico-system/goldmane-5b85766d88-q4k2r" Apr 13 20:17:25.117669 kubelet[2567]: I0413 20:17:25.117251 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d376255-a77b-4d63-b495-936717629436-config\") pod \"goldmane-5b85766d88-q4k2r\" (UID: \"5d376255-a77b-4d63-b495-936717629436\") " pod="calico-system/goldmane-5b85766d88-q4k2r" Apr 13 20:17:25.117669 kubelet[2567]: I0413 20:17:25.117269 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsq5\" (UniqueName: \"kubernetes.io/projected/81c6c9d7-1fb2-40ea-825b-f4755d87f2cb-kube-api-access-5lsq5\") pod \"calico-apiserver-59c869df47-hltqb\" (UID: \"81c6c9d7-1fb2-40ea-825b-f4755d87f2cb\") " pod="calico-system/calico-apiserver-59c869df47-hltqb" Apr 13 20:17:25.117669 kubelet[2567]: I0413 20:17:25.117286 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1b73c5c-4269-4782-9519-ef2714c0260c-calico-apiserver-certs\") pod \"calico-apiserver-59c869df47-v98s4\" (UID: \"a1b73c5c-4269-4782-9519-ef2714c0260c\") " pod="calico-system/calico-apiserver-59c869df47-v98s4" Apr 13 20:17:25.117669 kubelet[2567]: I0413 20:17:25.117306 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjg8\" (UniqueName: \"kubernetes.io/projected/06855e45-eda3-44b0-974d-ece42c1ccc17-kube-api-access-qxjg8\") pod \"coredns-674b8bbfcf-t5cs7\" (UID: 
\"06855e45-eda3-44b0-974d-ece42c1ccc17\") " pod="kube-system/coredns-674b8bbfcf-t5cs7" Apr 13 20:17:25.117915 kubelet[2567]: I0413 20:17:25.117322 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzz2\" (UniqueName: \"kubernetes.io/projected/1254aaf0-2b67-4da3-beb0-35175e6f0de5-kube-api-access-mlzz2\") pod \"calico-kube-controllers-79dcc8dcc6-bdhfl\" (UID: \"1254aaf0-2b67-4da3-beb0-35175e6f0de5\") " pod="calico-system/calico-kube-controllers-79dcc8dcc6-bdhfl" Apr 13 20:17:25.117915 kubelet[2567]: I0413 20:17:25.117343 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-nginx-config\") pod \"whisker-7496d95f5c-6h5ps\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " pod="calico-system/whisker-7496d95f5c-6h5ps" Apr 13 20:17:25.117915 kubelet[2567]: I0413 20:17:25.117360 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06855e45-eda3-44b0-974d-ece42c1ccc17-config-volume\") pod \"coredns-674b8bbfcf-t5cs7\" (UID: \"06855e45-eda3-44b0-974d-ece42c1ccc17\") " pod="kube-system/coredns-674b8bbfcf-t5cs7" Apr 13 20:17:25.117915 kubelet[2567]: I0413 20:17:25.117386 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-ca-bundle\") pod \"whisker-7496d95f5c-6h5ps\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " pod="calico-system/whisker-7496d95f5c-6h5ps" Apr 13 20:17:25.117915 kubelet[2567]: I0413 20:17:25.117403 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqft\" (UniqueName: 
\"kubernetes.io/projected/5d376255-a77b-4d63-b495-936717629436-kube-api-access-tdqft\") pod \"goldmane-5b85766d88-q4k2r\" (UID: \"5d376255-a77b-4d63-b495-936717629436\") " pod="calico-system/goldmane-5b85766d88-q4k2r" Apr 13 20:17:25.118049 kubelet[2567]: I0413 20:17:25.117417 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4pk\" (UniqueName: \"kubernetes.io/projected/a1b73c5c-4269-4782-9519-ef2714c0260c-kube-api-access-2t4pk\") pod \"calico-apiserver-59c869df47-v98s4\" (UID: \"a1b73c5c-4269-4782-9519-ef2714c0260c\") " pod="calico-system/calico-apiserver-59c869df47-v98s4" Apr 13 20:17:25.118049 kubelet[2567]: I0413 20:17:25.117432 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-backend-key-pair\") pod \"whisker-7496d95f5c-6h5ps\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " pod="calico-system/whisker-7496d95f5c-6h5ps" Apr 13 20:17:25.118049 kubelet[2567]: I0413 20:17:25.117444 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56q9l\" (UniqueName: \"kubernetes.io/projected/2e87e8c8-8e00-4dcc-840e-6674326e8d34-kube-api-access-56q9l\") pod \"whisker-7496d95f5c-6h5ps\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " pod="calico-system/whisker-7496d95f5c-6h5ps" Apr 13 20:17:25.118049 kubelet[2567]: I0413 20:17:25.117506 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/81c6c9d7-1fb2-40ea-825b-f4755d87f2cb-calico-apiserver-certs\") pod \"calico-apiserver-59c869df47-hltqb\" (UID: \"81c6c9d7-1fb2-40ea-825b-f4755d87f2cb\") " pod="calico-system/calico-apiserver-59c869df47-hltqb" Apr 13 20:17:25.118049 kubelet[2567]: I0413 20:17:25.117522 2567 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22c0e952-c353-4318-b086-e79219dca900-config-volume\") pod \"coredns-674b8bbfcf-7bgbg\" (UID: \"22c0e952-c353-4318-b086-e79219dca900\") " pod="kube-system/coredns-674b8bbfcf-7bgbg" Apr 13 20:17:25.118154 kubelet[2567]: I0413 20:17:25.117537 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d376255-a77b-4d63-b495-936717629436-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-q4k2r\" (UID: \"5d376255-a77b-4d63-b495-936717629436\") " pod="calico-system/goldmane-5b85766d88-q4k2r" Apr 13 20:17:25.118154 kubelet[2567]: I0413 20:17:25.117551 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1254aaf0-2b67-4da3-beb0-35175e6f0de5-tigera-ca-bundle\") pod \"calico-kube-controllers-79dcc8dcc6-bdhfl\" (UID: \"1254aaf0-2b67-4da3-beb0-35175e6f0de5\") " pod="calico-system/calico-kube-controllers-79dcc8dcc6-bdhfl" Apr 13 20:17:25.118154 kubelet[2567]: I0413 20:17:25.117564 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlgp\" (UniqueName: \"kubernetes.io/projected/22c0e952-c353-4318-b086-e79219dca900-kube-api-access-qtlgp\") pod \"coredns-674b8bbfcf-7bgbg\" (UID: \"22c0e952-c353-4318-b086-e79219dca900\") " pod="kube-system/coredns-674b8bbfcf-7bgbg" Apr 13 20:17:25.165859 containerd[1477]: time="2026-04-13T20:17:25.163846276Z" level=info msg="CreateContainer within sandbox \"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:17:25.175100 containerd[1477]: time="2026-04-13T20:17:25.175069931Z" level=info msg="CreateContainer within sandbox 
\"39f766f4a8b9278bf61174c1e8104385015ce6bc6ff3e39d2ca1d062e2f1bff2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9\"" Apr 13 20:17:25.175588 containerd[1477]: time="2026-04-13T20:17:25.175565802Z" level=info msg="StartContainer for \"37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9\"" Apr 13 20:17:25.203977 systemd[1]: Started cri-containerd-37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9.scope - libcontainer container 37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9. Apr 13 20:17:25.263010 containerd[1477]: time="2026-04-13T20:17:25.262295791Z" level=info msg="StartContainer for \"37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9\" returns successfully" Apr 13 20:17:25.349079 kubelet[2567]: E0413 20:17:25.349046 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:25.350909 containerd[1477]: time="2026-04-13T20:17:25.350861094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5cs7,Uid:06855e45-eda3-44b0-974d-ece42c1ccc17,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:25.359052 kubelet[2567]: E0413 20:17:25.356409 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:25.359795 containerd[1477]: time="2026-04-13T20:17:25.359766023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7bgbg,Uid:22c0e952-c353-4318-b086-e79219dca900,Namespace:kube-system,Attempt:0,}" Apr 13 20:17:25.376506 containerd[1477]: time="2026-04-13T20:17:25.376472756Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-q4k2r,Uid:5d376255-a77b-4d63-b495-936717629436,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:25.376740 containerd[1477]: time="2026-04-13T20:17:25.376655016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496d95f5c-6h5ps,Uid:2e87e8c8-8e00-4dcc-840e-6674326e8d34,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:25.395210 containerd[1477]: time="2026-04-13T20:17:25.394970365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c869df47-hltqb,Uid:81c6c9d7-1fb2-40ea-825b-f4755d87f2cb,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:25.411239 containerd[1477]: time="2026-04-13T20:17:25.410569115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dcc8dcc6-bdhfl,Uid:1254aaf0-2b67-4da3-beb0-35175e6f0de5,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:25.416993 containerd[1477]: time="2026-04-13T20:17:25.416963445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c869df47-v98s4,Uid:a1b73c5c-4269-4782-9519-ef2714c0260c,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:25.946483 systemd-networkd[1380]: calic820fe5192c: Link UP Apr 13 20:17:25.951433 systemd-networkd[1380]: calic820fe5192c: Gained carrier Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.623 [ERROR][3495] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.656 [INFO][3495] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0 whisker-7496d95f5c- calico-system 2e87e8c8-8e00-4dcc-840e-6674326e8d34 859 0 2026-04-13 20:17:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7496d95f5c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-25-54 whisker-7496d95f5c-6h5ps eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic820fe5192c [] [] }} ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.656 [INFO][3495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.771 [INFO][3560] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.782 [INFO][3560] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ed200), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"whisker-7496d95f5c-6h5ps", "timestamp":"2026-04-13 20:17:25.771996361 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b0580)} Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.782 [INFO][3560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.782 [INFO][3560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.782 [INFO][3560] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.785 [INFO][3560] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.801 [INFO][3560] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.866 [INFO][3560] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.871 [INFO][3560] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.876 [INFO][3560] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.886 [INFO][3560] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.891 [INFO][3560] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.891 [INFO][3560] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="172-234-25-54" 
subnet=192.168.76.0/26 Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.904 [INFO][3560] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.904 [INFO][3560] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="172-234-25-54" subnet=192.168.76.0/26 Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.907 [INFO][3560] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="172-234-25-54" subnet=192.168.76.0/26 Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.908 [INFO][3560] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.909 [INFO][3560] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907 Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.913 [INFO][3560] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.917 [INFO][3560] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.0/26] block=192.168.76.0/26 handle="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.917 [INFO][3560] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.0/26] handle="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" host="172-234-25-54" Apr 13 20:17:25.993903 containerd[1477]: 2026-04-13 20:17:25.917 [INFO][3560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.917 [INFO][3560] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.0/26] IPv6=[] ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.926 [INFO][3495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0", GenerateName:"whisker-7496d95f5c-", Namespace:"calico-system", SelfLink:"", UID:"2e87e8c8-8e00-4dcc-840e-6674326e8d34", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7496d95f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"whisker-7496d95f5c-6h5ps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calic820fe5192c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.927 [INFO][3495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.0/32] ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.927 [INFO][3495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic820fe5192c ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.953 [INFO][3495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.954 [INFO][3495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0", GenerateName:"whisker-7496d95f5c-", Namespace:"calico-system", SelfLink:"", UID:"2e87e8c8-8e00-4dcc-840e-6674326e8d34", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 
13, 20, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7496d95f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907", Pod:"whisker-7496d95f5c-6h5ps", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic820fe5192c", MAC:"36:ea:97:b0:fa:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:25.994706 containerd[1477]: 2026-04-13 20:17:25.972 [INFO][3495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Namespace="calico-system" Pod="whisker-7496d95f5c-6h5ps" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:25.994092 systemd-networkd[1380]: cali86d47cf2cfe: Link UP Apr 13 20:17:25.997437 systemd-networkd[1380]: cali86d47cf2cfe: Gained carrier Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.544 [ERROR][3472] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.618 [INFO][3472] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0 coredns-674b8bbfcf- kube-system 06855e45-eda3-44b0-974d-ece42c1ccc17 834 0 2026-04-13 20:17:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-25-54 coredns-674b8bbfcf-t5cs7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86d47cf2cfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.618 [INFO][3472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.765 [INFO][3555] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" HandleID="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.785 [INFO][3555] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" HandleID="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001221d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-25-54", 
"pod":"coredns-674b8bbfcf-t5cs7", "timestamp":"2026-04-13 20:17:25.765789511 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f82c0)} Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.785 [INFO][3555] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.917 [INFO][3555] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.918 [INFO][3555] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.922 [INFO][3555] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.930 [INFO][3555] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.937 [INFO][3555] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.939 [INFO][3555] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.944 [INFO][3555] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.944 [INFO][3555] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" host="172-234-25-54" Apr 13 20:17:26.029856 
containerd[1477]: 2026-04-13 20:17:25.949 [INFO][3555] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74 Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.964 [INFO][3555] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.973 [INFO][3555] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.1/26] block=192.168.76.0/26 handle="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.973 [INFO][3555] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.1/26] handle="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" host="172-234-25-54" Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.975 [INFO][3555] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:17:26.029856 containerd[1477]: 2026-04-13 20:17:25.975 [INFO][3555] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.1/26] IPv6=[] ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" HandleID="k8s-pod-network.63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:25.987 [INFO][3472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06855e45-eda3-44b0-974d-ece42c1ccc17", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"coredns-674b8bbfcf-t5cs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d47cf2cfe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:25.987 [INFO][3472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.1/32] ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:25.987 [INFO][3472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86d47cf2cfe ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:25.999 [INFO][3472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:25.999 [INFO][3472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06855e45-eda3-44b0-974d-ece42c1ccc17", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74", Pod:"coredns-674b8bbfcf-t5cs7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d47cf2cfe", MAC:"62:9c:ef:2b:37:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.030406 containerd[1477]: 2026-04-13 20:17:26.016 [INFO][3472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74" Namespace="kube-system" Pod="coredns-674b8bbfcf-t5cs7" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--t5cs7-eth0" Apr 13 20:17:26.053296 systemd[1]: Created slice kubepods-besteffort-pod71160270_58d3_403b_af73_5d23d46c4986.slice - libcontainer container kubepods-besteffort-pod71160270_58d3_403b_af73_5d23d46c4986.slice. Apr 13 20:17:26.061890 containerd[1477]: time="2026-04-13T20:17:26.060337534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.061890 containerd[1477]: time="2026-04-13T20:17:26.060455524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.061890 containerd[1477]: time="2026-04-13T20:17:26.060471505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.061890 containerd[1477]: time="2026-04-13T20:17:26.060594085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.061890 containerd[1477]: time="2026-04-13T20:17:26.058736839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbkt6,Uid:71160270-58d3-403b-af73-5d23d46c4986,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:26.103537 systemd-networkd[1380]: cali3807590b1a1: Link UP Apr 13 20:17:26.109692 systemd-networkd[1380]: cali3807590b1a1: Gained carrier Apr 13 20:17:26.124011 systemd[1]: Started cri-containerd-8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907.scope - libcontainer container 8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907. 
Apr 13 20:17:26.127300 containerd[1477]: time="2026-04-13T20:17:26.125680212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.127300 containerd[1477]: time="2026-04-13T20:17:26.125732712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.127300 containerd[1477]: time="2026-04-13T20:17:26.125746492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.127300 containerd[1477]: time="2026-04-13T20:17:26.125821592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.611 [ERROR][3496] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.671 [INFO][3496] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0 calico-kube-controllers-79dcc8dcc6- calico-system 1254aaf0-2b67-4da3-beb0-35175e6f0de5 846 0 2026-04-13 20:17:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79dcc8dcc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-25-54 calico-kube-controllers-79dcc8dcc6-bdhfl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3807590b1a1 [] [] }} ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" 
Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.672 [INFO][3496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.815 [INFO][3572] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" HandleID="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Workload="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.828 [INFO][3572] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" HandleID="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Workload="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378200), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"calico-kube-controllers-79dcc8dcc6-bdhfl", "timestamp":"2026-04-13 20:17:25.81540334 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00043eb00)} Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.828 [INFO][3572] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.973 [INFO][3572] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:25.974 [INFO][3572] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.024 [INFO][3572] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.033 [INFO][3572] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.063 [INFO][3572] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.067 [INFO][3572] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.070 [INFO][3572] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.071 [INFO][3572] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.074 [INFO][3572] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8 Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.084 [INFO][3572] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.090 
[INFO][3572] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.3/26] block=192.168.76.0/26 handle="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.090 [INFO][3572] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.3/26] handle="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" host="172-234-25-54" Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.090 [INFO][3572] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.144554 containerd[1477]: 2026-04-13 20:17:26.091 [INFO][3572] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.3/26] IPv6=[] ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" HandleID="k8s-pod-network.4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Workload="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.094 [INFO][3496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0", GenerateName:"calico-kube-controllers-79dcc8dcc6-", Namespace:"calico-system", SelfLink:"", UID:"1254aaf0-2b67-4da3-beb0-35175e6f0de5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"79dcc8dcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"calico-kube-controllers-79dcc8dcc6-bdhfl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3807590b1a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.096 [INFO][3496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.3/32] ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.096 [INFO][3496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3807590b1a1 ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.112 [INFO][3496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" 
WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.118 [INFO][3496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0", GenerateName:"calico-kube-controllers-79dcc8dcc6-", Namespace:"calico-system", SelfLink:"", UID:"1254aaf0-2b67-4da3-beb0-35175e6f0de5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dcc8dcc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8", Pod:"calico-kube-controllers-79dcc8dcc6-bdhfl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3807590b1a1", MAC:"de:80:c9:9f:aa:e7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.145268 containerd[1477]: 2026-04-13 20:17:26.137 [INFO][3496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8" Namespace="calico-system" Pod="calico-kube-controllers-79dcc8dcc6-bdhfl" WorkloadEndpoint="172--234--25--54-k8s-calico--kube--controllers--79dcc8dcc6--bdhfl-eth0" Apr 13 20:17:26.176017 systemd[1]: Started cri-containerd-63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74.scope - libcontainer container 63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74. Apr 13 20:17:26.180019 containerd[1477]: time="2026-04-13T20:17:26.179176854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.180019 containerd[1477]: time="2026-04-13T20:17:26.179475775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.180019 containerd[1477]: time="2026-04-13T20:17:26.179498315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.180019 containerd[1477]: time="2026-04-13T20:17:26.179620095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.185429 kubelet[2567]: I0413 20:17:26.185199 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t7k6n" podStartSLOduration=3.137641015 podStartE2EDuration="13.185185152s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:14.30027185 +0000 UTC m=+18.363973886" lastFinishedPulling="2026-04-13 20:17:24.347815977 +0000 UTC m=+28.411518023" observedRunningTime="2026-04-13 20:17:26.184031248 +0000 UTC m=+30.247733294" watchObservedRunningTime="2026-04-13 20:17:26.185185152 +0000 UTC m=+30.248887188" Apr 13 20:17:26.230005 systemd[1]: Started cri-containerd-4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8.scope - libcontainer container 4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8. Apr 13 20:17:26.240924 systemd-networkd[1380]: cali0a7ae762185: Link UP Apr 13 20:17:26.248718 systemd-networkd[1380]: cali0a7ae762185: Gained carrier Apr 13 20:17:26.276356 containerd[1477]: time="2026-04-13T20:17:26.276318968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t5cs7,Uid:06855e45-eda3-44b0-974d-ece42c1ccc17,Namespace:kube-system,Attempt:0,} returns sandbox id \"63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74\"" Apr 13 20:17:26.280744 kubelet[2567]: E0413 20:17:26.280713 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:17:26.290220 containerd[1477]: time="2026-04-13T20:17:26.290195470Z" level=info msg="CreateContainer within sandbox \"63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.626 [ERROR][3517] cni-plugin/utils.go 116: File does not exist, 
skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.668 [INFO][3517] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0 calico-apiserver-59c869df47- calico-system 81c6c9d7-1fb2-40ea-825b-f4755d87f2cb 843 0 2026-04-13 20:17:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c869df47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-25-54 calico-apiserver-59c869df47-hltqb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0a7ae762185 [] [] }} ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.671 [INFO][3517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.835 [INFO][3569] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" HandleID="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.845 [INFO][3569] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" HandleID="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"calico-apiserver-59c869df47-hltqb", "timestamp":"2026-04-13 20:17:25.835534304 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002dedc0)} Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:25.845 [INFO][3569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.091 [INFO][3569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.091 [INFO][3569] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.126 [INFO][3569] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.138 [INFO][3569] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.180 [INFO][3569] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.190 [INFO][3569] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.193 [INFO][3569] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.193 [INFO][3569] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.195 [INFO][3569] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.200 [INFO][3569] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.205 [INFO][3569] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.4/26] block=192.168.76.0/26 
handle="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.205 [INFO][3569] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.4/26] handle="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" host="172-234-25-54" Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.205 [INFO][3569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.298113 containerd[1477]: 2026-04-13 20:17:26.205 [INFO][3569] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.4/26] IPv6=[] ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" HandleID="k8s-pod-network.7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.230 [INFO][3517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0", GenerateName:"calico-apiserver-59c869df47-", Namespace:"calico-system", SelfLink:"", UID:"81c6c9d7-1fb2-40ea-825b-f4755d87f2cb", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c869df47", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"calico-apiserver-59c869df47-hltqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a7ae762185", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.231 [INFO][3517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.4/32] ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.231 [INFO][3517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a7ae762185 ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.253 [INFO][3517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.260 [INFO][3517] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0", GenerateName:"calico-apiserver-59c869df47-", Namespace:"calico-system", SelfLink:"", UID:"81c6c9d7-1fb2-40ea-825b-f4755d87f2cb", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c869df47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd", Pod:"calico-apiserver-59c869df47-hltqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a7ae762185", MAC:"ea:6c:97:12:63:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.299234 containerd[1477]: 2026-04-13 20:17:26.281 [INFO][3517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd" Namespace="calico-system" Pod="calico-apiserver-59c869df47-hltqb" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--hltqb-eth0" Apr 13 20:17:26.314422 containerd[1477]: time="2026-04-13T20:17:26.314010232Z" level=info msg="CreateContainer within sandbox \"63ab17486dfde1bc7a8ccd188cdf658487effd5f18371feb662108dbbdaa1c74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7eb02dc41d263eb7b4829bc816e5e58d19049722b9bd89c8cd98b19472beb2fc\"" Apr 13 20:17:26.314693 containerd[1477]: time="2026-04-13T20:17:26.314649265Z" level=info msg="StartContainer for \"7eb02dc41d263eb7b4829bc816e5e58d19049722b9bd89c8cd98b19472beb2fc\"" Apr 13 20:17:26.336514 systemd-networkd[1380]: cali180435fd7f7: Link UP Apr 13 20:17:26.338126 systemd-networkd[1380]: cali180435fd7f7: Gained carrier Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.571 [ERROR][3466] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.680 [INFO][3466] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0 coredns-674b8bbfcf- kube-system 22c0e952-c353-4318-b086-e79219dca900 837 0 2026-04-13 20:17:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-25-54 coredns-674b8bbfcf-7bgbg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali180435fd7f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.680 [INFO][3466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.837 [INFO][3568] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" HandleID="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.859 [INFO][3568] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" HandleID="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-25-54", "pod":"coredns-674b8bbfcf-7bgbg", "timestamp":"2026-04-13 20:17:25.837094159 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cf8c0)} Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:25.859 [INFO][3568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.206 [INFO][3568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.206 [INFO][3568] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.225 [INFO][3568] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.246 [INFO][3568] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.255 [INFO][3568] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.258 [INFO][3568] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.261 [INFO][3568] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.261 [INFO][3568] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.265 [INFO][3568] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9 Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.272 [INFO][3568] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.296 [INFO][3568] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.5/26] block=192.168.76.0/26 
handle="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.296 [INFO][3568] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.5/26] handle="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" host="172-234-25-54" Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.296 [INFO][3568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.421890 containerd[1477]: 2026-04-13 20:17:26.296 [INFO][3568] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.5/26] IPv6=[] ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" HandleID="k8s-pod-network.a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Workload="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.320 [INFO][3466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22c0e952-c353-4318-b086-e79219dca900", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"coredns-674b8bbfcf-7bgbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180435fd7f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.320 [INFO][3466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.5/32] ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.320 [INFO][3466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali180435fd7f7 ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.342 [INFO][3466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.345 [INFO][3466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22c0e952-c353-4318-b086-e79219dca900", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9", Pod:"coredns-674b8bbfcf-7bgbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali180435fd7f7", MAC:"7e:b6:dd:c3:e2:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.422761 containerd[1477]: 2026-04-13 20:17:26.383 [INFO][3466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9" Namespace="kube-system" Pod="coredns-674b8bbfcf-7bgbg" WorkloadEndpoint="172--234--25--54-k8s-coredns--674b8bbfcf--7bgbg-eth0" Apr 13 20:17:26.458972 systemd[1]: Started cri-containerd-7eb02dc41d263eb7b4829bc816e5e58d19049722b9bd89c8cd98b19472beb2fc.scope - libcontainer container 7eb02dc41d263eb7b4829bc816e5e58d19049722b9bd89c8cd98b19472beb2fc. Apr 13 20:17:26.471151 containerd[1477]: time="2026-04-13T20:17:26.470662308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.471151 containerd[1477]: time="2026-04-13T20:17:26.470735668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.471151 containerd[1477]: time="2026-04-13T20:17:26.470749558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.471151 containerd[1477]: time="2026-04-13T20:17:26.470854918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.472651 containerd[1477]: time="2026-04-13T20:17:26.472410642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dcc8dcc6-bdhfl,Uid:1254aaf0-2b67-4da3-beb0-35175e6f0de5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8\"" Apr 13 20:17:26.475023 containerd[1477]: time="2026-04-13T20:17:26.474988821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:17:26.496034 containerd[1477]: time="2026-04-13T20:17:26.496006094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496d95f5c-6h5ps,Uid:2e87e8c8-8e00-4dcc-840e-6674326e8d34,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\"" Apr 13 20:17:26.525982 systemd[1]: Started cri-containerd-7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd.scope - libcontainer container 7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd. Apr 13 20:17:26.551914 systemd-networkd[1380]: cali362adafce7f: Link UP Apr 13 20:17:26.561742 systemd-networkd[1380]: cali362adafce7f: Gained carrier Apr 13 20:17:26.593088 containerd[1477]: time="2026-04-13T20:17:26.592635488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.593088 containerd[1477]: time="2026-04-13T20:17:26.592690998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.593088 containerd[1477]: time="2026-04-13T20:17:26.592709458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.593088 containerd[1477]: time="2026-04-13T20:17:26.592988229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.610071 containerd[1477]: time="2026-04-13T20:17:26.608644476Z" level=info msg="StartContainer for \"7eb02dc41d263eb7b4829bc816e5e58d19049722b9bd89c8cd98b19472beb2fc\" returns successfully" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.647 [ERROR][3509] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.729 [INFO][3509] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0 calico-apiserver-59c869df47- calico-system a1b73c5c-4269-4782-9519-ef2714c0260c 844 0 2026-04-13 20:17:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c869df47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-25-54 calico-apiserver-59c869df47-v98s4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali362adafce7f [] [] }} ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.729 [INFO][3509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" 
Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.850 [INFO][3584] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" HandleID="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.874 [INFO][3584] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" HandleID="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103940), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"calico-apiserver-59c869df47-v98s4", "timestamp":"2026-04-13 20:17:25.850509553 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00014e6e0)} Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:25.875 [INFO][3584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.298 [INFO][3584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.298 [INFO][3584] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.327 [INFO][3584] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.365 [INFO][3584] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.408 [INFO][3584] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.434 [INFO][3584] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.452 [INFO][3584] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.453 [INFO][3584] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.464 [INFO][3584] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4 Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.494 [INFO][3584] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.521 [INFO][3584] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.6/26] block=192.168.76.0/26 
handle="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.522 [INFO][3584] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.6/26] handle="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" host="172-234-25-54" Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.522 [INFO][3584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.624122 containerd[1477]: 2026-04-13 20:17:26.522 [INFO][3584] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.6/26] IPv6=[] ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" HandleID="k8s-pod-network.e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Workload="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.540 [INFO][3509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0", GenerateName:"calico-apiserver-59c869df47-", Namespace:"calico-system", SelfLink:"", UID:"a1b73c5c-4269-4782-9519-ef2714c0260c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c869df47", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"calico-apiserver-59c869df47-v98s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali362adafce7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.540 [INFO][3509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.6/32] ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.540 [INFO][3509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali362adafce7f ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.569 [INFO][3509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.573 [INFO][3509] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0", GenerateName:"calico-apiserver-59c869df47-", Namespace:"calico-system", SelfLink:"", UID:"a1b73c5c-4269-4782-9519-ef2714c0260c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c869df47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4", Pod:"calico-apiserver-59c869df47-v98s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali362adafce7f", MAC:"4a:0a:20:a4:64:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.624638 containerd[1477]: 2026-04-13 20:17:26.604 [INFO][3509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4" Namespace="calico-system" Pod="calico-apiserver-59c869df47-v98s4" WorkloadEndpoint="172--234--25--54-k8s-calico--apiserver--59c869df47--v98s4-eth0" Apr 13 20:17:26.662423 systemd-networkd[1380]: cali41351481538: Link UP Apr 13 20:17:26.664963 systemd-networkd[1380]: cali41351481538: Gained carrier Apr 13 20:17:26.682099 systemd[1]: Started cri-containerd-a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9.scope - libcontainer container a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9. Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.681 [ERROR][3485] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.743 [INFO][3485] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0 goldmane-5b85766d88- calico-system 5d376255-a77b-4d63-b495-936717629436 841 0 2026-04-13 20:17:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-25-54 goldmane-5b85766d88-q4k2r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali41351481538 [] [] }} ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.744 [INFO][3485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" 
Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.879 [INFO][3591] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" HandleID="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Workload="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.892 [INFO][3591] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" HandleID="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Workload="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f7f50), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"goldmane-5b85766d88-q4k2r", "timestamp":"2026-04-13 20:17:25.879591055 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00031af20)} Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:25.892 [INFO][3591] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.522 [INFO][3591] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.523 [INFO][3591] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.549 [INFO][3591] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.570 [INFO][3591] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.609 [INFO][3591] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.613 [INFO][3591] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.621 [INFO][3591] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.621 [INFO][3591] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.623 [INFO][3591] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.631 [INFO][3591] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.645 [INFO][3591] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.7/26] block=192.168.76.0/26 
handle="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.645 [INFO][3591] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.7/26] handle="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" host="172-234-25-54" Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.645 [INFO][3591] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.725057 containerd[1477]: 2026-04-13 20:17:26.645 [INFO][3591] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.7/26] IPv6=[] ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" HandleID="k8s-pod-network.852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Workload="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.658 [INFO][3485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5d376255-a77b-4d63-b495-936717629436", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"goldmane-5b85766d88-q4k2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41351481538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.658 [INFO][3485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.7/32] ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.659 [INFO][3485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41351481538 ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.666 [INFO][3485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.679 [INFO][3485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" 
Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"5d376255-a77b-4d63-b495-936717629436", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c", Pod:"goldmane-5b85766d88-q4k2r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41351481538", MAC:"ea:97:61:80:a3:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.725539 containerd[1477]: 2026-04-13 20:17:26.714 [INFO][3485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c" Namespace="calico-system" Pod="goldmane-5b85766d88-q4k2r" WorkloadEndpoint="172--234--25--54-k8s-goldmane--5b85766d88--q4k2r-eth0" Apr 13 20:17:26.733315 containerd[1477]: 
time="2026-04-13T20:17:26.731672029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:26.733315 containerd[1477]: time="2026-04-13T20:17:26.731739579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:26.733315 containerd[1477]: time="2026-04-13T20:17:26.731750699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.733315 containerd[1477]: time="2026-04-13T20:17:26.732551171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:26.774997 systemd-networkd[1380]: calibae3ea8f99d: Link UP Apr 13 20:17:26.784302 systemd-networkd[1380]: calibae3ea8f99d: Gained carrier Apr 13 20:17:26.787133 systemd[1]: Started cri-containerd-e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4.scope - libcontainer container e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4. 
Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.192 [ERROR][3670] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.227 [INFO][3670] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-csi--node--driver--xbkt6-eth0 csi-node-driver- calico-system 71160270-58d3-403b-af73-5d23d46c4986 710 0 2026-04-13 20:17:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-25-54 csi-node-driver-xbkt6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibae3ea8f99d [] [] }} ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.228 [INFO][3670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.475 [INFO][3763] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" HandleID="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Workload="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.823461 containerd[1477]: 
2026-04-13 20:17:26.512 [INFO][3763] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" HandleID="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Workload="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f100), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"csi-node-driver-xbkt6", "timestamp":"2026-04-13 20:17:26.475744753 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005ee160)} Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.512 [INFO][3763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.646 [INFO][3763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.647 [INFO][3763] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.651 [INFO][3763] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.687 [INFO][3763] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.705 [INFO][3763] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.709 [INFO][3763] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.711 [INFO][3763] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.712 [INFO][3763] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.715 [INFO][3763] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458 Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.718 [INFO][3763] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.736 [INFO][3763] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.8/26] block=192.168.76.0/26 
handle="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.736 [INFO][3763] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.8/26] handle="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" host="172-234-25-54" Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.736 [INFO][3763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:26.823461 containerd[1477]: 2026-04-13 20:17:26.737 [INFO][3763] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.8/26] IPv6=[] ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" HandleID="k8s-pod-network.b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Workload="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.750 [INFO][3670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-csi--node--driver--xbkt6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71160270-58d3-403b-af73-5d23d46c4986", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"csi-node-driver-xbkt6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibae3ea8f99d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.750 [INFO][3670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.8/32] ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.750 [INFO][3670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibae3ea8f99d ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.789 [INFO][3670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.795 [INFO][3670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-csi--node--driver--xbkt6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71160270-58d3-403b-af73-5d23d46c4986", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458", Pod:"csi-node-driver-xbkt6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibae3ea8f99d", MAC:"92:35:2a:df:c6:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:17:26.826042 containerd[1477]: 2026-04-13 20:17:26.809 [INFO][3670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458" Namespace="calico-system" Pod="csi-node-driver-xbkt6" WorkloadEndpoint="172--234--25--54-k8s-csi--node--driver--xbkt6-eth0"
Apr 13 20:17:26.843678 containerd[1477]: time="2026-04-13T20:17:26.843538588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7bgbg,Uid:22c0e952-c353-4318-b086-e79219dca900,Namespace:kube-system,Attempt:0,} returns sandbox id \"a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9\""
Apr 13 20:17:26.847245 kubelet[2567]: E0413 20:17:26.847198 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:26.859534 containerd[1477]: time="2026-04-13T20:17:26.859272056Z" level=info msg="CreateContainer within sandbox \"a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:17:26.868524 containerd[1477]: time="2026-04-13T20:17:26.865632465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:17:26.868524 containerd[1477]: time="2026-04-13T20:17:26.866375287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:17:26.871738 containerd[1477]: time="2026-04-13T20:17:26.867707391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:26.871738 containerd[1477]: time="2026-04-13T20:17:26.869308166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:26.873235 containerd[1477]: time="2026-04-13T20:17:26.873202438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c869df47-hltqb,Uid:81c6c9d7-1fb2-40ea-825b-f4755d87f2cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd\""
Apr 13 20:17:26.921043 systemd[1]: Started cri-containerd-852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c.scope - libcontainer container 852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c.
Apr 13 20:17:26.927751 containerd[1477]: time="2026-04-13T20:17:26.915714716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:17:26.927751 containerd[1477]: time="2026-04-13T20:17:26.915764677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:17:26.927751 containerd[1477]: time="2026-04-13T20:17:26.915788917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:26.927751 containerd[1477]: time="2026-04-13T20:17:26.915962827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:17:26.935527 containerd[1477]: time="2026-04-13T20:17:26.934486113Z" level=info msg="CreateContainer within sandbox \"a34336f214a402c755bb4e265959a7eb133f4dc17e32beb9c584202e0678aef9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2d2f4c8686a18fe2509735e6e005c50af252b41b8624b0214a8608f03378526\""
Apr 13 20:17:26.935857 containerd[1477]: time="2026-04-13T20:17:26.935725277Z" level=info msg="StartContainer for \"f2d2f4c8686a18fe2509735e6e005c50af252b41b8624b0214a8608f03378526\""
Apr 13 20:17:26.951746 containerd[1477]: time="2026-04-13T20:17:26.951699286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c869df47-v98s4,Uid:a1b73c5c-4269-4782-9519-ef2714c0260c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4\""
Apr 13 20:17:26.961000 systemd[1]: Started cri-containerd-b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458.scope - libcontainer container b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458.
Apr 13 20:17:27.003498 systemd[1]: Started cri-containerd-f2d2f4c8686a18fe2509735e6e005c50af252b41b8624b0214a8608f03378526.scope - libcontainer container f2d2f4c8686a18fe2509735e6e005c50af252b41b8624b0214a8608f03378526.
Apr 13 20:17:27.015994 containerd[1477]: time="2026-04-13T20:17:27.015963908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xbkt6,Uid:71160270-58d3-403b-af73-5d23d46c4986,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458\""
Apr 13 20:17:27.031398 containerd[1477]: time="2026-04-13T20:17:27.031303122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q4k2r,Uid:5d376255-a77b-4d63-b495-936717629436,Namespace:calico-system,Attempt:0,} returns sandbox id \"852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c\""
Apr 13 20:17:27.058482 containerd[1477]: time="2026-04-13T20:17:27.058436009Z" level=info msg="StartContainer for \"f2d2f4c8686a18fe2509735e6e005c50af252b41b8624b0214a8608f03378526\" returns successfully"
Apr 13 20:17:27.163274 kubelet[2567]: E0413 20:17:27.163235 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:27.165960 kubelet[2567]: E0413 20:17:27.165935 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:27.173652 kubelet[2567]: I0413 20:17:27.173499 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:17:27.206313 kubelet[2567]: I0413 20:17:27.206239 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7bgbg" podStartSLOduration=26.206223864 podStartE2EDuration="26.206223864s" podCreationTimestamp="2026-04-13 20:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:17:27.186782208 +0000 UTC m=+31.250484244" watchObservedRunningTime="2026-04-13 20:17:27.206223864 +0000 UTC m=+31.269925900"
Apr 13 20:17:27.371091 systemd-networkd[1380]: calic820fe5192c: Gained IPv6LL
Apr 13 20:17:27.626077 systemd-networkd[1380]: cali3807590b1a1: Gained IPv6LL
Apr 13 20:17:27.690423 systemd-networkd[1380]: cali0a7ae762185: Gained IPv6LL
Apr 13 20:17:27.731948 kernel: calico-node[4111]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 13 20:17:27.946953 systemd-networkd[1380]: cali86d47cf2cfe: Gained IPv6LL
Apr 13 20:17:27.948365 systemd-networkd[1380]: cali41351481538: Gained IPv6LL
Apr 13 20:17:28.011136 systemd-networkd[1380]: cali362adafce7f: Gained IPv6LL
Apr 13 20:17:28.012373 systemd-networkd[1380]: calibae3ea8f99d: Gained IPv6LL
Apr 13 20:17:28.176946 kubelet[2567]: E0413 20:17:28.176916 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:28.177687 kubelet[2567]: E0413 20:17:28.177281 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:28.202922 systemd-networkd[1380]: cali180435fd7f7: Gained IPv6LL
Apr 13 20:17:28.491883 systemd-networkd[1380]: vxlan.calico: Link UP
Apr 13 20:17:28.491888 systemd-networkd[1380]: vxlan.calico: Gained carrier
Apr 13 20:17:29.182036 kubelet[2567]: E0413 20:17:29.181706 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:29.184543 kubelet[2567]: E0413 20:17:29.183062 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Apr 13 20:17:29.529121 containerd[1477]: time="2026-04-13T20:17:29.529056177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:29.530049 containerd[1477]: time="2026-04-13T20:17:29.529921119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 13 20:17:29.531773 containerd[1477]: time="2026-04-13T20:17:29.530571620Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:29.532861 containerd[1477]: time="2026-04-13T20:17:29.532494825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:29.533387 containerd[1477]: time="2026-04-13T20:17:29.533265598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.058125687s"
Apr 13 20:17:29.533387 containerd[1477]: time="2026-04-13T20:17:29.533292998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 13 20:17:29.536595 containerd[1477]: time="2026-04-13T20:17:29.536438565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 13 20:17:29.551140 containerd[1477]: time="2026-04-13T20:17:29.551098904Z" level=info msg="CreateContainer within sandbox \"4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 13 20:17:29.564471 containerd[1477]: time="2026-04-13T20:17:29.563626506Z" level=info msg="CreateContainer within sandbox \"4a7f75abcc165374849558c34f0d1a0231b9fddbb57139c2c559af468e6850c8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac\""
Apr 13 20:17:29.564806 containerd[1477]: time="2026-04-13T20:17:29.564773490Z" level=info msg="StartContainer for \"b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac\""
Apr 13 20:17:29.621022 systemd[1]: Started cri-containerd-b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac.scope - libcontainer container b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac.
Apr 13 20:17:29.667083 containerd[1477]: time="2026-04-13T20:17:29.667006904Z" level=info msg="StartContainer for \"b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac\" returns successfully"
Apr 13 20:17:29.867126 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL
Apr 13 20:17:30.197527 kubelet[2567]: I0413 20:17:30.197334 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t5cs7" podStartSLOduration=29.197320639 podStartE2EDuration="29.197320639s" podCreationTimestamp="2026-04-13 20:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:17:27.228432958 +0000 UTC m=+31.292134994" watchObservedRunningTime="2026-04-13 20:17:30.197320639 +0000 UTC m=+34.261022675"
Apr 13 20:17:30.431390 containerd[1477]: time="2026-04-13T20:17:30.431343914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:30.432140 containerd[1477]: time="2026-04-13T20:17:30.432098025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 13 20:17:30.432798 containerd[1477]: time="2026-04-13T20:17:30.432745306Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:30.434751 containerd[1477]: time="2026-04-13T20:17:30.434712911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:30.435452 containerd[1477]: time="2026-04-13T20:17:30.435346913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 898.880457ms"
Apr 13 20:17:30.435452 containerd[1477]: time="2026-04-13T20:17:30.435374264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 13 20:17:30.437004 containerd[1477]: time="2026-04-13T20:17:30.436972047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 20:17:30.439624 containerd[1477]: time="2026-04-13T20:17:30.439596244Z" level=info msg="CreateContainer within sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 13 20:17:30.462726 containerd[1477]: time="2026-04-13T20:17:30.462282869Z" level=info msg="CreateContainer within sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\""
Apr 13 20:17:30.463985 containerd[1477]: time="2026-04-13T20:17:30.462968481Z" level=info msg="StartContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\""
Apr 13 20:17:30.491971 systemd[1]: Started cri-containerd-bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b.scope - libcontainer container bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b.
Apr 13 20:17:30.544276 containerd[1477]: time="2026-04-13T20:17:30.544245820Z" level=info msg="StartContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" returns successfully"
Apr 13 20:17:31.189634 kubelet[2567]: I0413 20:17:31.189599 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:17:32.124638 containerd[1477]: time="2026-04-13T20:17:32.124582258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:32.125757 containerd[1477]: time="2026-04-13T20:17:32.125623320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 13 20:17:32.126343 containerd[1477]: time="2026-04-13T20:17:32.126294261Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:32.129006 containerd[1477]: time="2026-04-13T20:17:32.128915488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:32.130952 containerd[1477]: time="2026-04-13T20:17:32.130454651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.693341514s"
Apr 13 20:17:32.130952 containerd[1477]: time="2026-04-13T20:17:32.130497761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 13 20:17:32.135344 containerd[1477]: time="2026-04-13T20:17:32.134767421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 20:17:32.143633 containerd[1477]: time="2026-04-13T20:17:32.143598840Z" level=info msg="CreateContainer within sandbox \"7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 20:17:32.157865 containerd[1477]: time="2026-04-13T20:17:32.157814981Z" level=info msg="CreateContainer within sandbox \"7f650aa4e0b2e7fcee8650bb9b6ade72571f923f14bb0def3ec717c8c1ce1edd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3ab3ddcb05539b0d9d9e63a2e30e46bc6402397a6e360e9ed48948267585c64\""
Apr 13 20:17:32.159743 containerd[1477]: time="2026-04-13T20:17:32.159708616Z" level=info msg="StartContainer for \"d3ab3ddcb05539b0d9d9e63a2e30e46bc6402397a6e360e9ed48948267585c64\""
Apr 13 20:17:32.201114 systemd[1]: Started cri-containerd-d3ab3ddcb05539b0d9d9e63a2e30e46bc6402397a6e360e9ed48948267585c64.scope - libcontainer container d3ab3ddcb05539b0d9d9e63a2e30e46bc6402397a6e360e9ed48948267585c64.
Apr 13 20:17:32.255472 containerd[1477]: time="2026-04-13T20:17:32.255375668Z" level=info msg="StartContainer for \"d3ab3ddcb05539b0d9d9e63a2e30e46bc6402397a6e360e9ed48948267585c64\" returns successfully"
Apr 13 20:17:32.330009 containerd[1477]: time="2026-04-13T20:17:32.329951693Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:32.330542 containerd[1477]: time="2026-04-13T20:17:32.330514554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 13 20:17:32.332904 containerd[1477]: time="2026-04-13T20:17:32.332882000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 198.081259ms"
Apr 13 20:17:32.333073 containerd[1477]: time="2026-04-13T20:17:32.332988510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 13 20:17:32.335317 containerd[1477]: time="2026-04-13T20:17:32.334307073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 13 20:17:32.339896 containerd[1477]: time="2026-04-13T20:17:32.339874245Z" level=info msg="CreateContainer within sandbox \"e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 20:17:32.363373 containerd[1477]: time="2026-04-13T20:17:32.362926277Z" level=info msg="CreateContainer within sandbox \"e7d7e6003179b5e48ac00eb6cb83843b4812fb7413a6731ca6f8e889a0ad40e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf9bd6e0dd922e80d397b9e34a68d3c23861e66f8ca3b7202709e0674de00139\""
Apr 13 20:17:32.366925 containerd[1477]: time="2026-04-13T20:17:32.365681443Z" level=info msg="StartContainer for \"bf9bd6e0dd922e80d397b9e34a68d3c23861e66f8ca3b7202709e0674de00139\""
Apr 13 20:17:32.421975 systemd[1]: Started cri-containerd-bf9bd6e0dd922e80d397b9e34a68d3c23861e66f8ca3b7202709e0674de00139.scope - libcontainer container bf9bd6e0dd922e80d397b9e34a68d3c23861e66f8ca3b7202709e0674de00139.
Apr 13 20:17:32.485415 containerd[1477]: time="2026-04-13T20:17:32.485336888Z" level=info msg="StartContainer for \"bf9bd6e0dd922e80d397b9e34a68d3c23861e66f8ca3b7202709e0674de00139\" returns successfully"
Apr 13 20:17:33.268645 kubelet[2567]: I0413 20:17:33.268599 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79dcc8dcc6-bdhfl" podStartSLOduration=16.20619178 podStartE2EDuration="19.266891363s" podCreationTimestamp="2026-04-13 20:17:14 +0000 UTC" firstStartedPulling="2026-04-13 20:17:26.473661717 +0000 UTC m=+30.537363753" lastFinishedPulling="2026-04-13 20:17:29.5343613 +0000 UTC m=+33.598063336" observedRunningTime="2026-04-13 20:17:30.199122224 +0000 UTC m=+34.262824270" watchObservedRunningTime="2026-04-13 20:17:33.266891363 +0000 UTC m=+37.330593399"
Apr 13 20:17:33.272321 kubelet[2567]: I0413 20:17:33.271870 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59c869df47-hltqb" podStartSLOduration=15.013822701 podStartE2EDuration="20.271859144s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:26.875066174 +0000 UTC m=+30.938768220" lastFinishedPulling="2026-04-13 20:17:32.133102627 +0000 UTC m=+36.196804663" observedRunningTime="2026-04-13 20:17:33.251955522 +0000 UTC m=+37.315657568" watchObservedRunningTime="2026-04-13 20:17:33.271859144 +0000 UTC m=+37.335561180"
Apr 13 20:17:33.278443 kubelet[2567]: I0413 20:17:33.278388 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59c869df47-v98s4" podStartSLOduration=14.898137707 podStartE2EDuration="20.278378978s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:26.953618031 +0000 UTC m=+31.017320067" lastFinishedPulling="2026-04-13 20:17:32.333859302 +0000 UTC m=+36.397561338" observedRunningTime="2026-04-13 20:17:33.277597046 +0000 UTC m=+37.341299082" watchObservedRunningTime="2026-04-13 20:17:33.278378978 +0000 UTC m=+37.342081014"
Apr 13 20:17:33.581717 containerd[1477]: time="2026-04-13T20:17:33.581444388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:33.583409 containerd[1477]: time="2026-04-13T20:17:33.583375591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 13 20:17:33.585690 containerd[1477]: time="2026-04-13T20:17:33.584283433Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:33.587288 containerd[1477]: time="2026-04-13T20:17:33.587257469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:33.589605 containerd[1477]: time="2026-04-13T20:17:33.589573925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.255238372s"
Apr 13 20:17:33.589739 containerd[1477]: time="2026-04-13T20:17:33.589701155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 13 20:17:33.592638 containerd[1477]: time="2026-04-13T20:17:33.592609501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 13 20:17:33.615249 containerd[1477]: time="2026-04-13T20:17:33.615219968Z" level=info msg="CreateContainer within sandbox \"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 13 20:17:33.636356 containerd[1477]: time="2026-04-13T20:17:33.636305963Z" level=info msg="CreateContainer within sandbox \"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9813c1af0be3f162cc55f243d41390111ba80889efe3483d372fd1be3683f38d\""
Apr 13 20:17:33.637101 containerd[1477]: time="2026-04-13T20:17:33.637071164Z" level=info msg="StartContainer for \"9813c1af0be3f162cc55f243d41390111ba80889efe3483d372fd1be3683f38d\""
Apr 13 20:17:33.677288 systemd[1]: Started cri-containerd-9813c1af0be3f162cc55f243d41390111ba80889efe3483d372fd1be3683f38d.scope - libcontainer container 9813c1af0be3f162cc55f243d41390111ba80889efe3483d372fd1be3683f38d.
Apr 13 20:17:33.736341 containerd[1477]: time="2026-04-13T20:17:33.736295624Z" level=info msg="StartContainer for \"9813c1af0be3f162cc55f243d41390111ba80889efe3483d372fd1be3683f38d\" returns successfully"
Apr 13 20:17:34.212022 kubelet[2567]: I0413 20:17:34.211997 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:17:34.212517 kubelet[2567]: I0413 20:17:34.212187 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:17:34.989052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088196016.mount: Deactivated successfully.
Apr 13 20:17:35.307963 containerd[1477]: time="2026-04-13T20:17:35.307913680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:35.308890 containerd[1477]: time="2026-04-13T20:17:35.308826243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 13 20:17:35.309432 containerd[1477]: time="2026-04-13T20:17:35.309390734Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:35.311454 containerd[1477]: time="2026-04-13T20:17:35.311413967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:35.312574 containerd[1477]: time="2026-04-13T20:17:35.312143829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.719414607s"
Apr 13 20:17:35.312574 containerd[1477]: time="2026-04-13T20:17:35.312171419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 13 20:17:35.314671 containerd[1477]: time="2026-04-13T20:17:35.314642934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 13 20:17:35.316959 containerd[1477]: time="2026-04-13T20:17:35.316811938Z" level=info msg="CreateContainer within sandbox \"852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 13 20:17:35.343022 containerd[1477]: time="2026-04-13T20:17:35.342977178Z" level=info msg="CreateContainer within sandbox \"852b2a3a262f914c3a2aa9ba4abf85a73f461e52d4a11fd5c110a04b93d1bf9c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56\""
Apr 13 20:17:35.343618 containerd[1477]: time="2026-04-13T20:17:35.343580389Z" level=info msg="StartContainer for \"5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56\""
Apr 13 20:17:35.387961 systemd[1]: Started cri-containerd-5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56.scope - libcontainer container 5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56.
Apr 13 20:17:35.433350 containerd[1477]: time="2026-04-13T20:17:35.433298851Z" level=info msg="StartContainer for \"5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56\" returns successfully"
Apr 13 20:17:36.234946 kubelet[2567]: I0413 20:17:36.234865 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-q4k2r" podStartSLOduration=14.959187924 podStartE2EDuration="23.234109124s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:27.038046261 +0000 UTC m=+31.101748297" lastFinishedPulling="2026-04-13 20:17:35.312967461 +0000 UTC m=+39.376669497" observedRunningTime="2026-04-13 20:17:36.233496444 +0000 UTC m=+40.297198500" watchObservedRunningTime="2026-04-13 20:17:36.234109124 +0000 UTC m=+40.297811160"
Apr 13 20:17:36.431099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747142914.mount: Deactivated successfully.
Apr 13 20:17:36.442624 containerd[1477]: time="2026-04-13T20:17:36.442582495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:36.443576 containerd[1477]: time="2026-04-13T20:17:36.443541747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 13 20:17:36.444169 containerd[1477]: time="2026-04-13T20:17:36.444128828Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:36.445993 containerd[1477]: time="2026-04-13T20:17:36.445957382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:17:36.447144 containerd[1477]: time="2026-04-13T20:17:36.446604282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.131929738s"
Apr 13 20:17:36.447144 containerd[1477]: time="2026-04-13T20:17:36.446631112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 13 20:17:36.449768 containerd[1477]: time="2026-04-13T20:17:36.449750439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 13 20:17:36.452549 containerd[1477]: time="2026-04-13T20:17:36.452524293Z" level=info msg="CreateContainer within sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 13 20:17:36.468130 containerd[1477]: time="2026-04-13T20:17:36.467642021Z" level=info msg="CreateContainer within sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\""
Apr 13 20:17:36.470473 containerd[1477]: time="2026-04-13T20:17:36.470448816Z" level=info msg="StartContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\""
Apr 13 20:17:36.471095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006766044.mount: Deactivated successfully.
Apr 13 20:17:36.512986 systemd[1]: Started cri-containerd-bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5.scope - libcontainer container bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5.
Apr 13 20:17:36.554637 containerd[1477]: time="2026-04-13T20:17:36.554596280Z" level=info msg="StartContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" returns successfully"
Apr 13 20:17:37.221582 kubelet[2567]: I0413 20:17:37.221552 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:17:37.223892 containerd[1477]: time="2026-04-13T20:17:37.222903983Z" level=info msg="StopContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" with timeout 30 (s)"
Apr 13 20:17:37.224046 containerd[1477]: time="2026-04-13T20:17:37.224029135Z" level=info msg="StopContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" with timeout 30 (s)"
Apr 13 20:17:37.224875 containerd[1477]: time="2026-04-13T20:17:37.224659336Z" level=info msg="Stop container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" with signal terminated"
Apr 13 20:17:37.226130 containerd[1477]: time="2026-04-13T20:17:37.226070079Z" level=info msg="Stop container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" with signal terminated"
Apr 13 20:17:37.241346 kubelet[2567]: I0413 20:17:37.240703 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7496d95f5c-6h5ps" podStartSLOduration=12.289757597 podStartE2EDuration="22.240689064s" podCreationTimestamp="2026-04-13 20:17:15 +0000 UTC" firstStartedPulling="2026-04-13 20:17:26.497393869 +0000 UTC m=+30.561095915" lastFinishedPulling="2026-04-13 20:17:36.448325346 +0000 UTC m=+40.512027382" observedRunningTime="2026-04-13 20:17:37.239332101 +0000 UTC m=+41.303034137" watchObservedRunningTime="2026-04-13 20:17:37.240689064 +0000 UTC m=+41.304391100"
Apr 13 20:17:37.254771 systemd[1]: cri-containerd-bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5.scope: Deactivated successfully.
Apr 13 20:17:37.266974 systemd[1]: cri-containerd-bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b.scope: Deactivated successfully. Apr 13 20:17:37.311457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5-rootfs.mount: Deactivated successfully. Apr 13 20:17:37.313415 containerd[1477]: time="2026-04-13T20:17:37.313052560Z" level=info msg="shim disconnected" id=bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b namespace=k8s.io Apr 13 20:17:37.313415 containerd[1477]: time="2026-04-13T20:17:37.313108440Z" level=warning msg="cleaning up after shim disconnected" id=bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b namespace=k8s.io Apr 13 20:17:37.313415 containerd[1477]: time="2026-04-13T20:17:37.313117440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:37.316536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b-rootfs.mount: Deactivated successfully. 
Apr 13 20:17:37.358250 containerd[1477]: time="2026-04-13T20:17:37.358180619Z" level=info msg="shim disconnected" id=bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5 namespace=k8s.io Apr 13 20:17:37.358250 containerd[1477]: time="2026-04-13T20:17:37.358227939Z" level=warning msg="cleaning up after shim disconnected" id=bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5 namespace=k8s.io Apr 13 20:17:37.358250 containerd[1477]: time="2026-04-13T20:17:37.358237489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:37.383081 containerd[1477]: time="2026-04-13T20:17:37.382999172Z" level=info msg="StopContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" returns successfully" Apr 13 20:17:37.392967 containerd[1477]: time="2026-04-13T20:17:37.392810619Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:17:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:17:37.400240 containerd[1477]: time="2026-04-13T20:17:37.400195963Z" level=info msg="StopContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" returns successfully" Apr 13 20:17:37.401249 containerd[1477]: time="2026-04-13T20:17:37.401221094Z" level=info msg="StopPodSandbox for \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\"" Apr 13 20:17:37.401350 containerd[1477]: time="2026-04-13T20:17:37.401262644Z" level=info msg="Container to stop \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:17:37.401350 containerd[1477]: time="2026-04-13T20:17:37.401278864Z" level=info msg="Container to stop \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 20:17:37.407650 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907-shm.mount: Deactivated successfully. Apr 13 20:17:37.423862 systemd[1]: cri-containerd-8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907.scope: Deactivated successfully. Apr 13 20:17:37.470776 containerd[1477]: time="2026-04-13T20:17:37.470707605Z" level=info msg="shim disconnected" id=8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907 namespace=k8s.io Apr 13 20:17:37.470776 containerd[1477]: time="2026-04-13T20:17:37.470772815Z" level=warning msg="cleaning up after shim disconnected" id=8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907 namespace=k8s.io Apr 13 20:17:37.471303 containerd[1477]: time="2026-04-13T20:17:37.470785595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:17:37.472090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907-rootfs.mount: Deactivated successfully. 
Apr 13 20:17:37.498571 containerd[1477]: time="2026-04-13T20:17:37.498504744Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:17:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:17:37.529925 containerd[1477]: time="2026-04-13T20:17:37.529884328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:37.531116 containerd[1477]: time="2026-04-13T20:17:37.531084290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:17:37.531594 containerd[1477]: time="2026-04-13T20:17:37.531557561Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:37.535376 containerd[1477]: time="2026-04-13T20:17:37.535351628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:17:37.536434 containerd[1477]: time="2026-04-13T20:17:37.536313380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.086451421s" Apr 13 20:17:37.536434 containerd[1477]: time="2026-04-13T20:17:37.536351130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:17:37.541587 containerd[1477]: time="2026-04-13T20:17:37.541462348Z" level=info msg="CreateContainer within sandbox \"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:17:37.565805 containerd[1477]: time="2026-04-13T20:17:37.565767861Z" level=info msg="CreateContainer within sandbox \"b5763eaab433d97e885c1c6dcf1c300ff8296e94d6672a035e994e06cfa45458\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03dac4fb51d96b3546c82bd5a1f4ee61e8e06b677a58c7898a43d8b21cfccc3d\"" Apr 13 20:17:37.566459 containerd[1477]: time="2026-04-13T20:17:37.566381382Z" level=info msg="StartContainer for \"03dac4fb51d96b3546c82bd5a1f4ee61e8e06b677a58c7898a43d8b21cfccc3d\"" Apr 13 20:17:37.586451 systemd-networkd[1380]: calic820fe5192c: Link DOWN Apr 13 20:17:37.586459 systemd-networkd[1380]: calic820fe5192c: Lost carrier Apr 13 20:17:37.620260 systemd[1]: Started cri-containerd-03dac4fb51d96b3546c82bd5a1f4ee61e8e06b677a58c7898a43d8b21cfccc3d.scope - libcontainer container 03dac4fb51d96b3546c82bd5a1f4ee61e8e06b677a58c7898a43d8b21cfccc3d. Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.582 [INFO][4760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.585 [INFO][4760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" iface="eth0" netns="/var/run/netns/cni-3b8d1b37-8fe0-2c57-63c5-92d42884897c" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.585 [INFO][4760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" iface="eth0" netns="/var/run/netns/cni-3b8d1b37-8fe0-2c57-63c5-92d42884897c" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.593 [INFO][4760] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" after=7.853123ms iface="eth0" netns="/var/run/netns/cni-3b8d1b37-8fe0-2c57-63c5-92d42884897c" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.593 [INFO][4760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.593 [INFO][4760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.641 [INFO][4782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.641 [INFO][4782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.641 [INFO][4782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.684 [INFO][4782] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.684 [INFO][4782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.686 [INFO][4782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:37.695035 containerd[1477]: 2026-04-13 20:17:37.690 [INFO][4760] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:37.697217 containerd[1477]: time="2026-04-13T20:17:37.697169170Z" level=info msg="TearDown network for sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" successfully" Apr 13 20:17:37.697346 containerd[1477]: time="2026-04-13T20:17:37.697217770Z" level=info msg="StopPodSandbox for \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" returns successfully" Apr 13 20:17:37.730352 containerd[1477]: time="2026-04-13T20:17:37.730232287Z" level=info msg="StartContainer for \"03dac4fb51d96b3546c82bd5a1f4ee61e8e06b677a58c7898a43d8b21cfccc3d\" returns successfully" Apr 13 20:17:37.820069 kubelet[2567]: I0413 20:17:37.820013 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56q9l\" (UniqueName: \"kubernetes.io/projected/2e87e8c8-8e00-4dcc-840e-6674326e8d34-kube-api-access-56q9l\") pod \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " Apr 13 20:17:37.820069 kubelet[2567]: I0413 20:17:37.820071 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-nginx-config\") pod \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " Apr 13 20:17:37.820323 kubelet[2567]: I0413 20:17:37.820088 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-ca-bundle\") pod \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " Apr 13 20:17:37.820323 kubelet[2567]: I0413 20:17:37.820109 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-backend-key-pair\") pod \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\" (UID: \"2e87e8c8-8e00-4dcc-840e-6674326e8d34\") " Apr 13 20:17:37.821475 kubelet[2567]: I0413 20:17:37.821122 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2e87e8c8-8e00-4dcc-840e-6674326e8d34" (UID: "2e87e8c8-8e00-4dcc-840e-6674326e8d34"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:17:37.823473 kubelet[2567]: I0413 20:17:37.823451 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2e87e8c8-8e00-4dcc-840e-6674326e8d34" (UID: "2e87e8c8-8e00-4dcc-840e-6674326e8d34"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:17:37.825432 kubelet[2567]: I0413 20:17:37.825410 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2e87e8c8-8e00-4dcc-840e-6674326e8d34" (UID: "2e87e8c8-8e00-4dcc-840e-6674326e8d34"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:17:37.826071 kubelet[2567]: I0413 20:17:37.826024 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e87e8c8-8e00-4dcc-840e-6674326e8d34-kube-api-access-56q9l" (OuterVolumeSpecName: "kube-api-access-56q9l") pod "2e87e8c8-8e00-4dcc-840e-6674326e8d34" (UID: "2e87e8c8-8e00-4dcc-840e-6674326e8d34"). InnerVolumeSpecName "kube-api-access-56q9l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:17:37.920651 kubelet[2567]: I0413 20:17:37.920578 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56q9l\" (UniqueName: \"kubernetes.io/projected/2e87e8c8-8e00-4dcc-840e-6674326e8d34-kube-api-access-56q9l\") on node \"172-234-25-54\" DevicePath \"\"" Apr 13 20:17:37.920651 kubelet[2567]: I0413 20:17:37.920607 2567 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-nginx-config\") on node \"172-234-25-54\" DevicePath \"\"" Apr 13 20:17:37.920651 kubelet[2567]: I0413 20:17:37.920619 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-ca-bundle\") on node \"172-234-25-54\" DevicePath \"\"" Apr 13 20:17:37.920651 kubelet[2567]: I0413 20:17:37.920629 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e87e8c8-8e00-4dcc-840e-6674326e8d34-whisker-backend-key-pair\") on node \"172-234-25-54\" DevicePath \"\"" Apr 13 20:17:38.049876 systemd[1]: Removed slice kubepods-besteffort-pod2e87e8c8_8e00_4dcc_840e_6674326e8d34.slice - libcontainer container kubepods-besteffort-pod2e87e8c8_8e00_4dcc_840e_6674326e8d34.slice. Apr 13 20:17:38.130999 kubelet[2567]: I0413 20:17:38.130780 2567 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:17:38.132112 kubelet[2567]: I0413 20:17:38.132084 2567 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:17:38.189020 systemd[1]: run-netns-cni\x2d3b8d1b37\x2d8fe0\x2d2c57\x2d63c5\x2d92d42884897c.mount: Deactivated successfully. 
Apr 13 20:17:38.189145 systemd[1]: var-lib-kubelet-pods-2e87e8c8\x2d8e00\x2d4dcc\x2d840e\x2d6674326e8d34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56q9l.mount: Deactivated successfully. Apr 13 20:17:38.189225 systemd[1]: var-lib-kubelet-pods-2e87e8c8\x2d8e00\x2d4dcc\x2d840e\x2d6674326e8d34-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:17:38.227929 kubelet[2567]: I0413 20:17:38.227151 2567 scope.go:117] "RemoveContainer" containerID="bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5" Apr 13 20:17:38.229170 containerd[1477]: time="2026-04-13T20:17:38.229144460Z" level=info msg="RemoveContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\"" Apr 13 20:17:38.235890 containerd[1477]: time="2026-04-13T20:17:38.235860211Z" level=info msg="RemoveContainer for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" returns successfully" Apr 13 20:17:38.236158 kubelet[2567]: I0413 20:17:38.236080 2567 scope.go:117] "RemoveContainer" containerID="bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b" Apr 13 20:17:38.238263 containerd[1477]: time="2026-04-13T20:17:38.238230785Z" level=info msg="RemoveContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\"" Apr 13 20:17:38.243443 containerd[1477]: time="2026-04-13T20:17:38.243380054Z" level=info msg="RemoveContainer for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" returns successfully" Apr 13 20:17:38.244048 kubelet[2567]: I0413 20:17:38.243924 2567 scope.go:117] "RemoveContainer" containerID="bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5" Apr 13 20:17:38.244920 kubelet[2567]: E0413 20:17:38.244349 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": not 
found" containerID="bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5" Apr 13 20:17:38.244920 kubelet[2567]: I0413 20:17:38.244371 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5"} err="failed to get container status \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": not found" Apr 13 20:17:38.244920 kubelet[2567]: I0413 20:17:38.244436 2567 scope.go:117] "RemoveContainer" containerID="bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b" Apr 13 20:17:38.245053 containerd[1477]: time="2026-04-13T20:17:38.244083985Z" level=error msg="ContainerStatus for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": not found" Apr 13 20:17:38.245053 containerd[1477]: time="2026-04-13T20:17:38.244956477Z" level=error msg="ContainerStatus for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": not found" Apr 13 20:17:38.245133 kubelet[2567]: E0413 20:17:38.245047 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": not found" containerID="bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b" Apr 13 20:17:38.245133 kubelet[2567]: I0413 20:17:38.245064 2567 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b"} err="failed to get container status \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": not found" Apr 13 20:17:38.245133 kubelet[2567]: I0413 20:17:38.245078 2567 scope.go:117] "RemoveContainer" containerID="bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5" Apr 13 20:17:38.247318 containerd[1477]: time="2026-04-13T20:17:38.245647167Z" level=error msg="ContainerStatus for \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": not found" Apr 13 20:17:38.247318 containerd[1477]: time="2026-04-13T20:17:38.247097140Z" level=error msg="ContainerStatus for \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": not found" Apr 13 20:17:38.247603 kubelet[2567]: I0413 20:17:38.245976 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xbkt6" podStartSLOduration=14.730975794999999 podStartE2EDuration="25.245912378s" podCreationTimestamp="2026-04-13 20:17:13 +0000 UTC" firstStartedPulling="2026-04-13 20:17:27.023101929 +0000 UTC m=+31.086803975" lastFinishedPulling="2026-04-13 20:17:37.538038512 +0000 UTC m=+41.601740558" observedRunningTime="2026-04-13 20:17:38.242053031 +0000 UTC m=+42.305755117" watchObservedRunningTime="2026-04-13 20:17:38.245912378 +0000 UTC m=+42.309614424" Apr 13 20:17:38.247603 kubelet[2567]: I0413 20:17:38.246316 2567 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5"} err="failed to get container status \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdaf83acd976882fdf8de63beebfa09a5adf1fadee5124c71452b2ffc5af8aa5\": not found" Apr 13 20:17:38.247603 kubelet[2567]: I0413 20:17:38.246330 2567 scope.go:117] "RemoveContainer" containerID="bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b" Apr 13 20:17:38.248103 kubelet[2567]: I0413 20:17:38.247642 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b"} err="failed to get container status \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcbe76259d84d042871dec5d6ac225eff8d485692b16495117b3718e9ceff50b\": not found" Apr 13 20:17:38.297543 systemd[1]: Created slice kubepods-besteffort-pod0c6c1e72_0375_4a79_a6ba_68ae51d8f3bb.slice - libcontainer container kubepods-besteffort-pod0c6c1e72_0375_4a79_a6ba_68ae51d8f3bb.slice. 
Apr 13 20:17:38.323237 kubelet[2567]: I0413 20:17:38.323103 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb-whisker-ca-bundle\") pod \"whisker-775fcb6fdb-xvlb2\" (UID: \"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb\") " pod="calico-system/whisker-775fcb6fdb-xvlb2" Apr 13 20:17:38.323237 kubelet[2567]: I0413 20:17:38.323149 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb-whisker-backend-key-pair\") pod \"whisker-775fcb6fdb-xvlb2\" (UID: \"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb\") " pod="calico-system/whisker-775fcb6fdb-xvlb2" Apr 13 20:17:38.323237 kubelet[2567]: I0413 20:17:38.323177 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2k8\" (UniqueName: \"kubernetes.io/projected/0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb-kube-api-access-vs2k8\") pod \"whisker-775fcb6fdb-xvlb2\" (UID: \"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb\") " pod="calico-system/whisker-775fcb6fdb-xvlb2" Apr 13 20:17:38.323237 kubelet[2567]: I0413 20:17:38.323197 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb-nginx-config\") pod \"whisker-775fcb6fdb-xvlb2\" (UID: \"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb\") " pod="calico-system/whisker-775fcb6fdb-xvlb2" Apr 13 20:17:38.604519 containerd[1477]: time="2026-04-13T20:17:38.604376015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775fcb6fdb-xvlb2,Uid:0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb,Namespace:calico-system,Attempt:0,}" Apr 13 20:17:38.721536 systemd-networkd[1380]: cali4573fa42c35: Link UP Apr 13 20:17:38.724194 systemd-networkd[1380]: 
cali4573fa42c35: Gained carrier Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.654 [INFO][4844] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0 whisker-775fcb6fdb- calico-system 0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb 1025 0 2026-04-13 20:17:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775fcb6fdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-25-54 whisker-775fcb6fdb-xvlb2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4573fa42c35 [] [] }} ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.655 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.682 [INFO][4855] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" HandleID="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Workload="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.690 [INFO][4855] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" HandleID="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" 
Workload="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-25-54", "pod":"whisker-775fcb6fdb-xvlb2", "timestamp":"2026-04-13 20:17:38.682120595 +0000 UTC"}, Hostname:"172-234-25-54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001146e0)} Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.690 [INFO][4855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.690 [INFO][4855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.690 [INFO][4855] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-25-54' Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.692 [INFO][4855] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.696 [INFO][4855] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.700 [INFO][4855] ipam/ipam.go 526: Trying affinity for 192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.701 [INFO][4855] ipam/ipam.go 160: Attempting to load block cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.703 [INFO][4855] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.703 [INFO][4855] 
ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.704 [INFO][4855] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418 Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.709 [INFO][4855] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.715 [INFO][4855] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.76.9/26] block=192.168.76.0/26 handle="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.715 [INFO][4855] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.76.9/26] handle="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" host="172-234-25-54" Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.715 [INFO][4855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:17:38.746144 containerd[1477]: 2026-04-13 20:17:38.715 [INFO][4855] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.76.9/26] IPv6=[] ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" HandleID="k8s-pod-network.826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Workload="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.718 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0", GenerateName:"whisker-775fcb6fdb-", Namespace:"calico-system", SelfLink:"", UID:"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775fcb6fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"", Pod:"whisker-775fcb6fdb-xvlb2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali4573fa42c35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.718 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.9/32] ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.718 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4573fa42c35 ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.720 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.721 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0", GenerateName:"whisker-775fcb6fdb-", Namespace:"calico-system", SelfLink:"", UID:"0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 
13, 20, 17, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775fcb6fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-25-54", ContainerID:"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418", Pod:"whisker-775fcb6fdb-xvlb2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4573fa42c35", MAC:"06:7e:73:f5:af:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:17:38.748006 containerd[1477]: 2026-04-13 20:17:38.741 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418" Namespace="calico-system" Pod="whisker-775fcb6fdb-xvlb2" WorkloadEndpoint="172--234--25--54-k8s-whisker--775fcb6fdb--xvlb2-eth0" Apr 13 20:17:38.778044 containerd[1477]: time="2026-04-13T20:17:38.777718074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:17:38.778162 containerd[1477]: time="2026-04-13T20:17:38.778098275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:17:38.778207 containerd[1477]: time="2026-04-13T20:17:38.778154875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:38.779842 containerd[1477]: time="2026-04-13T20:17:38.778426645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:17:38.810967 systemd[1]: Started cri-containerd-826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418.scope - libcontainer container 826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418. Apr 13 20:17:38.910105 containerd[1477]: time="2026-04-13T20:17:38.910049634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775fcb6fdb-xvlb2,Uid:0c6c1e72-0375-4a79-a6ba-68ae51d8f3bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418\"" Apr 13 20:17:38.915300 containerd[1477]: time="2026-04-13T20:17:38.915273354Z" level=info msg="CreateContainer within sandbox \"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:17:38.928446 containerd[1477]: time="2026-04-13T20:17:38.928407565Z" level=info msg="CreateContainer within sandbox \"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"20bfcafb9d1899d73a79e11c820e20454e31b50053e1ec4353f4ab56360c187a\"" Apr 13 20:17:38.930009 containerd[1477]: time="2026-04-13T20:17:38.929052476Z" level=info msg="StartContainer for \"20bfcafb9d1899d73a79e11c820e20454e31b50053e1ec4353f4ab56360c187a\"" Apr 13 20:17:38.975034 systemd[1]: Started cri-containerd-20bfcafb9d1899d73a79e11c820e20454e31b50053e1ec4353f4ab56360c187a.scope - libcontainer container 20bfcafb9d1899d73a79e11c820e20454e31b50053e1ec4353f4ab56360c187a. 
Apr 13 20:17:39.038762 containerd[1477]: time="2026-04-13T20:17:39.038607166Z" level=info msg="StartContainer for \"20bfcafb9d1899d73a79e11c820e20454e31b50053e1ec4353f4ab56360c187a\" returns successfully" Apr 13 20:17:39.045862 containerd[1477]: time="2026-04-13T20:17:39.045759558Z" level=info msg="CreateContainer within sandbox \"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:17:39.064170 containerd[1477]: time="2026-04-13T20:17:39.064120517Z" level=info msg="CreateContainer within sandbox \"826db430404c476bb366a47088b9966e915f51c0e083caa77a3898825b3e5418\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8cb2dc4d080c27367e569087fd3a4e4979e233c916f126893a6e683e2b823783\"" Apr 13 20:17:39.065574 containerd[1477]: time="2026-04-13T20:17:39.065535159Z" level=info msg="StartContainer for \"8cb2dc4d080c27367e569087fd3a4e4979e233c916f126893a6e683e2b823783\"" Apr 13 20:17:39.114065 systemd[1]: Started cri-containerd-8cb2dc4d080c27367e569087fd3a4e4979e233c916f126893a6e683e2b823783.scope - libcontainer container 8cb2dc4d080c27367e569087fd3a4e4979e233c916f126893a6e683e2b823783. 
Apr 13 20:17:39.203857 containerd[1477]: time="2026-04-13T20:17:39.203627219Z" level=info msg="StartContainer for \"8cb2dc4d080c27367e569087fd3a4e4979e233c916f126893a6e683e2b823783\" returns successfully" Apr 13 20:17:39.250076 kubelet[2567]: I0413 20:17:39.249211 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-775fcb6fdb-xvlb2" podStartSLOduration=1.249193031 podStartE2EDuration="1.249193031s" podCreationTimestamp="2026-04-13 20:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:17:39.24815381 +0000 UTC m=+43.311855846" watchObservedRunningTime="2026-04-13 20:17:39.249193031 +0000 UTC m=+43.312895077" Apr 13 20:17:40.043223 kubelet[2567]: I0413 20:17:40.043177 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e87e8c8-8e00-4dcc-840e-6674326e8d34" path="/var/lib/kubelet/pods/2e87e8c8-8e00-4dcc-840e-6674326e8d34/volumes" Apr 13 20:17:40.106426 systemd-networkd[1380]: cali4573fa42c35: Gained IPv6LL Apr 13 20:17:42.562220 kubelet[2567]: I0413 20:17:42.562100 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:43.666406 kubelet[2567]: I0413 20:17:43.665821 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:43.798097 systemd[1]: run-containerd-runc-k8s.io-37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9-runc.L1yfOS.mount: Deactivated successfully. Apr 13 20:17:46.064465 systemd[1]: Started sshd@8-172.234.25.54:22-91.214.130.133:33612.service - OpenSSH per-connection server daemon (91.214.130.133:33612). 
Apr 13 20:17:47.375490 kubelet[2567]: I0413 20:17:47.375193 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:48.993759 kubelet[2567]: I0413 20:17:48.993403 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:17:49.015182 systemd[1]: run-containerd-runc-k8s.io-5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56-runc.inpNHH.mount: Deactivated successfully. Apr 13 20:17:56.049046 containerd[1477]: time="2026-04-13T20:17:56.048754470Z" level=info msg="StopPodSandbox for \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\"" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.084 [WARNING][5185] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.084 [INFO][5185] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.084 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" iface="eth0" netns="" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.084 [INFO][5185] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.084 [INFO][5185] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.105 [INFO][5192] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.105 [INFO][5192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.105 [INFO][5192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.110 [WARNING][5192] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.110 [INFO][5192] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.111 [INFO][5192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:56.122729 containerd[1477]: 2026-04-13 20:17:56.114 [INFO][5185] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.122729 containerd[1477]: time="2026-04-13T20:17:56.122168664Z" level=info msg="TearDown network for sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" successfully" Apr 13 20:17:56.122729 containerd[1477]: time="2026-04-13T20:17:56.122219074Z" level=info msg="StopPodSandbox for \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" returns successfully" Apr 13 20:17:56.122729 containerd[1477]: time="2026-04-13T20:17:56.122630854Z" level=info msg="RemovePodSandbox for \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\"" Apr 13 20:17:56.122729 containerd[1477]: time="2026-04-13T20:17:56.122655944Z" level=info msg="Forcibly stopping sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\"" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.161 [WARNING][5206] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" WorkloadEndpoint="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.161 [INFO][5206] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.161 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" iface="eth0" netns="" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.161 [INFO][5206] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.161 [INFO][5206] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.185 [INFO][5214] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.185 [INFO][5214] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.185 [INFO][5214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.193 [WARNING][5214] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.193 [INFO][5214] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" HandleID="k8s-pod-network.8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Workload="172--234--25--54-k8s-whisker--7496d95f5c--6h5ps-eth0" Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.194 [INFO][5214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:17:56.199282 containerd[1477]: 2026-04-13 20:17:56.197 [INFO][5206] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907" Apr 13 20:17:56.199687 containerd[1477]: time="2026-04-13T20:17:56.199323241Z" level=info msg="TearDown network for sandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" successfully" Apr 13 20:17:56.204033 containerd[1477]: time="2026-04-13T20:17:56.204002375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:17:56.204107 containerd[1477]: time="2026-04-13T20:17:56.204061625Z" level=info msg="RemovePodSandbox \"8c3e2b7d9c0b7044e6729000c10c72927efdbd43d423949fc831aa4925203907\" returns successfully" Apr 13 20:18:00.923167 systemd[1]: Started sshd@9-172.234.25.54:22-188.191.22.248:43920.service - OpenSSH per-connection server daemon (188.191.22.248:43920). 
Apr 13 20:18:00.972954 kubelet[2567]: I0413 20:18:00.972909 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:18:13.040642 kubelet[2567]: E0413 20:18:13.040591 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:13.782680 systemd[1]: run-containerd-runc-k8s.io-37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9-runc.AEgO1V.mount: Deactivated successfully. Apr 13 20:18:27.040908 kubelet[2567]: E0413 20:18:27.040854 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:29.041027 kubelet[2567]: E0413 20:18:29.040977 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:31.040585 kubelet[2567]: E0413 20:18:31.040363 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:33.474985 systemd[1]: Started sshd@10-172.234.25.54:22-95.104.170.14:2582.service - OpenSSH per-connection server daemon (95.104.170.14:2582). Apr 13 20:18:36.041199 kubelet[2567]: E0413 20:18:36.041038 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:42.643735 systemd[1]: run-containerd-runc-k8s.io-b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac-runc.tBoimg.mount: Deactivated successfully. 
Apr 13 20:18:43.040816 kubelet[2567]: E0413 20:18:43.040782 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:46.041818 kubelet[2567]: E0413 20:18:46.041653 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:18:48.993048 systemd[1]: sshd@7-172.234.25.54:22-195.18.19.246:41012.service: Deactivated successfully. Apr 13 20:19:01.393992 systemd[1]: Started sshd@11-172.234.25.54:22-85.140.0.77:30763.service - OpenSSH per-connection server daemon (85.140.0.77:30763). Apr 13 20:19:13.781490 systemd[1]: run-containerd-runc-k8s.io-37e362b25bc012bfa64d2b2ff3b818cfc51b214e33a3580028ef883c7a25cab9-runc.qfAqmu.mount: Deactivated successfully. Apr 13 20:19:15.506044 systemd[1]: Started sshd@12-172.234.25.54:22-50.85.169.122:48246.service - OpenSSH per-connection server daemon (50.85.169.122:48246). Apr 13 20:19:16.211783 sshd[5551]: Accepted publickey for core from 50.85.169.122 port 48246 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:16.213879 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:16.219569 systemd-logind[1458]: New session 8 of user core. Apr 13 20:19:16.224969 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:19:16.778958 sshd[5551]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:16.782976 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:19:16.783918 systemd[1]: sshd@12-172.234.25.54:22-50.85.169.122:48246.service: Deactivated successfully. Apr 13 20:19:16.785838 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:19:16.787151 systemd-logind[1458]: Removed session 8. 
Apr 13 20:19:19.040989 kubelet[2567]: E0413 20:19:19.040447 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:19:21.907180 systemd[1]: Started sshd@13-172.234.25.54:22-50.85.169.122:53452.service - OpenSSH per-connection server daemon (50.85.169.122:53452). Apr 13 20:19:22.612066 sshd[5584]: Accepted publickey for core from 50.85.169.122 port 53452 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:22.615908 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:22.620254 systemd-logind[1458]: New session 9 of user core. Apr 13 20:19:22.627064 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:19:23.175693 sshd[5584]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:23.181308 systemd[1]: sshd@13-172.234.25.54:22-50.85.169.122:53452.service: Deactivated successfully. Apr 13 20:19:23.181809 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:19:23.184501 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:19:23.185383 systemd-logind[1458]: Removed session 9. Apr 13 20:19:28.300047 systemd[1]: Started sshd@14-172.234.25.54:22-50.85.169.122:53454.service - OpenSSH per-connection server daemon (50.85.169.122:53454). Apr 13 20:19:29.010308 sshd[5623]: Accepted publickey for core from 50.85.169.122 port 53454 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:29.012309 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:29.017446 systemd-logind[1458]: New session 10 of user core. Apr 13 20:19:29.022963 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 13 20:19:29.578468 sshd[5623]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:29.582084 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:19:29.582511 systemd[1]: sshd@14-172.234.25.54:22-50.85.169.122:53454.service: Deactivated successfully. Apr 13 20:19:29.586530 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:19:29.591633 systemd-logind[1458]: Removed session 10. Apr 13 20:19:29.700971 systemd[1]: Started sshd@15-172.234.25.54:22-50.85.169.122:45786.service - OpenSSH per-connection server daemon (50.85.169.122:45786). Apr 13 20:19:30.003732 systemd[1]: run-containerd-runc-k8s.io-b910b05952a7bc38285f71e150fd5949d622a8de51e2e1b55de368b14ad50bac-runc.YGKlkG.mount: Deactivated successfully. Apr 13 20:19:30.411647 sshd[5637]: Accepted publickey for core from 50.85.169.122 port 45786 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:30.413763 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:30.418499 systemd-logind[1458]: New session 11 of user core. Apr 13 20:19:30.421978 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:19:31.007982 sshd[5637]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:31.013339 systemd[1]: sshd@15-172.234.25.54:22-50.85.169.122:45786.service: Deactivated successfully. Apr 13 20:19:31.015376 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:19:31.018067 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:19:31.019957 systemd-logind[1458]: Removed session 11. 
Apr 13 20:19:31.040489 kubelet[2567]: E0413 20:19:31.040379 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:19:31.132235 systemd[1]: Started sshd@16-172.234.25.54:22-50.85.169.122:45794.service - OpenSSH per-connection server daemon (50.85.169.122:45794). Apr 13 20:19:31.845172 sshd[5667]: Accepted publickey for core from 50.85.169.122 port 45794 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:31.846792 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:31.850947 systemd-logind[1458]: New session 12 of user core. Apr 13 20:19:31.855983 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:19:32.406946 sshd[5667]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:32.411393 systemd[1]: sshd@16-172.234.25.54:22-50.85.169.122:45794.service: Deactivated successfully. Apr 13 20:19:32.414406 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:19:32.415239 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:19:32.416275 systemd-logind[1458]: Removed session 12. Apr 13 20:19:37.535177 systemd[1]: Started sshd@17-172.234.25.54:22-50.85.169.122:45806.service - OpenSSH per-connection server daemon (50.85.169.122:45806). Apr 13 20:19:38.245325 sshd[5682]: Accepted publickey for core from 50.85.169.122 port 45806 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:38.247514 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:38.252889 systemd-logind[1458]: New session 13 of user core. Apr 13 20:19:38.257965 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 13 20:19:38.804866 sshd[5682]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:38.808195 systemd[1]: sshd@17-172.234.25.54:22-50.85.169.122:45806.service: Deactivated successfully. Apr 13 20:19:38.811453 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:19:38.813202 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:19:38.814368 systemd-logind[1458]: Removed session 13. Apr 13 20:19:38.934233 systemd[1]: Started sshd@18-172.234.25.54:22-50.85.169.122:45816.service - OpenSSH per-connection server daemon (50.85.169.122:45816). Apr 13 20:19:39.636541 sshd[5696]: Accepted publickey for core from 50.85.169.122 port 45816 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:39.638155 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:39.642807 systemd-logind[1458]: New session 14 of user core. Apr 13 20:19:39.648972 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:19:40.411735 sshd[5696]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:40.417616 systemd[1]: sshd@18-172.234.25.54:22-50.85.169.122:45816.service: Deactivated successfully. Apr 13 20:19:40.420512 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:19:40.421526 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:19:40.422677 systemd-logind[1458]: Removed session 14. Apr 13 20:19:40.541105 systemd[1]: Started sshd@19-172.234.25.54:22-50.85.169.122:49314.service - OpenSSH per-connection server daemon (50.85.169.122:49314). Apr 13 20:19:41.255870 sshd[5707]: Accepted publickey for core from 50.85.169.122 port 49314 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:41.256775 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:41.266284 systemd-logind[1458]: New session 15 of user core. 
Apr 13 20:19:41.274154 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:19:42.044669 kubelet[2567]: E0413 20:19:42.044310 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:19:42.333721 sshd[5707]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:42.336867 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:19:42.337444 systemd[1]: sshd@19-172.234.25.54:22-50.85.169.122:49314.service: Deactivated successfully. Apr 13 20:19:42.341566 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:19:42.343292 systemd-logind[1458]: Removed session 15. Apr 13 20:19:42.456005 systemd[1]: Started sshd@20-172.234.25.54:22-50.85.169.122:49316.service - OpenSSH per-connection server daemon (50.85.169.122:49316). Apr 13 20:19:43.168406 sshd[5736]: Accepted publickey for core from 50.85.169.122 port 49316 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:43.169418 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:43.174819 systemd-logind[1458]: New session 16 of user core. Apr 13 20:19:43.179011 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:19:43.842323 sshd[5736]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:43.845778 systemd[1]: sshd@20-172.234.25.54:22-50.85.169.122:49316.service: Deactivated successfully. Apr 13 20:19:43.848314 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:19:43.850095 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:19:43.851178 systemd-logind[1458]: Removed session 16. Apr 13 20:19:43.967438 systemd[1]: Started sshd@21-172.234.25.54:22-50.85.169.122:49326.service - OpenSSH per-connection server daemon (50.85.169.122:49326). 
Apr 13 20:19:44.686918 sshd[5787]: Accepted publickey for core from 50.85.169.122 port 49326 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:44.688539 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:44.694888 systemd-logind[1458]: New session 17 of user core. Apr 13 20:19:44.701960 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:19:45.239381 sshd[5787]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:45.243310 systemd[1]: sshd@21-172.234.25.54:22-50.85.169.122:49326.service: Deactivated successfully. Apr 13 20:19:45.245611 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:19:45.247590 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:19:45.248560 systemd-logind[1458]: Removed session 17. Apr 13 20:19:46.101237 systemd[1]: sshd@8-172.234.25.54:22-91.214.130.133:33612.service: Deactivated successfully. Apr 13 20:19:49.096577 systemd[1]: run-containerd-runc-k8s.io-5bf215d04fa09c1573c2eb1df4b546c88e2a363ca26250963581c4d46c4d3e56-runc.jXukEr.mount: Deactivated successfully. Apr 13 20:19:50.367158 systemd[1]: Started sshd@22-172.234.25.54:22-50.85.169.122:47828.service - OpenSSH per-connection server daemon (50.85.169.122:47828). Apr 13 20:19:51.086289 sshd[5826]: Accepted publickey for core from 50.85.169.122 port 47828 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:51.088084 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:51.093618 systemd-logind[1458]: New session 18 of user core. Apr 13 20:19:51.097992 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:19:51.651286 sshd[5826]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:51.655509 systemd[1]: sshd@22-172.234.25.54:22-50.85.169.122:47828.service: Deactivated successfully. 
Apr 13 20:19:51.658562 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:19:51.662590 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:19:51.664238 systemd-logind[1458]: Removed session 18. Apr 13 20:19:56.042306 kubelet[2567]: E0413 20:19:56.042165 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:19:56.047356 kubelet[2567]: E0413 20:19:56.046237 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Apr 13 20:19:56.779334 systemd[1]: Started sshd@23-172.234.25.54:22-50.85.169.122:47836.service - OpenSSH per-connection server daemon (50.85.169.122:47836). Apr 13 20:19:57.485877 sshd[5841]: Accepted publickey for core from 50.85.169.122 port 47836 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:19:57.488065 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:19:57.495785 systemd-logind[1458]: New session 19 of user core. Apr 13 20:19:57.502957 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 20:19:58.047182 sshd[5841]: pam_unix(sshd:session): session closed for user core Apr 13 20:19:58.056711 systemd[1]: sshd@23-172.234.25.54:22-50.85.169.122:47836.service: Deactivated successfully. Apr 13 20:19:58.058975 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:19:58.059810 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:19:58.060857 systemd-logind[1458]: Removed session 19.