Mar 13 00:39:25.934197 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:39:25.934221 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:25.934230 kernel: BIOS-provided physical RAM map:
Mar 13 00:39:25.934236 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 13 00:39:25.934242 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 13 00:39:25.934248 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:39:25.934258 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 13 00:39:25.934264 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 13 00:39:25.934270 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:39:25.934276 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:39:25.934282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:39:25.934289 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:39:25.934295 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 13 00:39:25.934302 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:39:25.934311 kernel: NX (Execute Disable) protection: active
Mar 13 00:39:25.934318 kernel: APIC: Static calls initialized
Mar 13 00:39:25.934325 kernel: SMBIOS 2.8 present.
Mar 13 00:39:25.934331 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 13 00:39:25.934338 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:39:25.934345 kernel: Hypervisor detected: KVM
Mar 13 00:39:25.934353 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:39:25.934360 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:39:25.934366 kernel: kvm-clock: using sched offset of 7212753250 cycles
Mar 13 00:39:25.934373 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:39:25.934380 kernel: tsc: Detected 1999.998 MHz processor
Mar 13 00:39:25.934387 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:39:25.934394 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:39:25.934401 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 13 00:39:25.934408 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:39:25.934415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:39:25.934423 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:39:25.934430 kernel: Using GB pages for direct mapping
Mar 13 00:39:25.934437 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:39:25.934444 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 13 00:39:25.934450 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934457 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934464 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934471 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 13 00:39:25.934477 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934487 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934497 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934504 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:25.934511 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 13 00:39:25.934518 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 13 00:39:25.934527 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 13 00:39:25.934534 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 13 00:39:25.934543 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 13 00:39:25.934550 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 13 00:39:25.934557 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 13 00:39:25.934564 kernel: No NUMA configuration found
Mar 13 00:39:25.934572 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 13 00:39:25.934579 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Mar 13 00:39:25.934586 kernel: Zone ranges:
Mar 13 00:39:25.934595 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:39:25.934602 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:39:25.934609 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:39:25.934645 kernel: Device empty
Mar 13 00:39:25.934653 kernel: Movable zone start for each node
Mar 13 00:39:25.934660 kernel: Early memory node ranges
Mar 13 00:39:25.934667 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:39:25.934674 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 13 00:39:25.934681 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:39:25.934688 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 13 00:39:25.934698 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:39:25.934705 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:39:25.934712 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 13 00:39:25.934720 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:39:25.934727 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:39:25.934734 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:39:25.934741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:39:25.934748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:39:25.934755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:39:25.934764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:39:25.934771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:39:25.934778 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:39:25.934785 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:39:25.934792 kernel: TSC deadline timer available
Mar 13 00:39:25.934798 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:39:25.934805 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:39:25.934812 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:39:25.934819 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:39:25.934828 kernel: CPU topo: Num. cores per package: 2
Mar 13 00:39:25.934835 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:39:25.934841 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:39:25.934848 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:39:25.934855 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:39:25.934862 kernel: kvm-guest: setup PV sched yield
Mar 13 00:39:25.934869 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:39:25.934876 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:39:25.934883 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:39:25.934892 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:39:25.934899 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:39:25.934906 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:39:25.934913 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:39:25.934919 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:39:25.934926 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:39:25.934934 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:25.934942 kernel: random: crng init done
Mar 13 00:39:25.934950 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:39:25.934957 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:39:25.934964 kernel: Fallback order for Node 0: 0
Mar 13 00:39:25.934971 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Mar 13 00:39:25.934978 kernel: Policy zone: Normal
Mar 13 00:39:25.934985 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:39:25.934992 kernel: software IO TLB: area num 2.
Mar 13 00:39:25.934999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:39:25.935006 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:39:25.935015 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:39:25.935022 kernel: Dynamic Preempt: voluntary
Mar 13 00:39:25.935028 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:39:25.935041 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:39:25.935048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:39:25.935055 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:39:25.935062 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:39:25.935069 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:39:25.935076 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:39:25.935083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:39:25.935092 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:39:25.935106 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:39:25.935115 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:39:25.935123 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:39:25.935130 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:39:25.935137 kernel: Console: colour VGA+ 80x25
Mar 13 00:39:25.935145 kernel: printk: legacy console [tty0] enabled
Mar 13 00:39:25.935152 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:39:25.935159 kernel: ACPI: Core revision 20240827
Mar 13 00:39:25.935169 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:39:25.935176 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:39:25.935183 kernel: x2apic enabled
Mar 13 00:39:25.935190 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:39:25.935198 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:39:25.935205 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:39:25.935212 kernel: kvm-guest: setup PV IPIs
Mar 13 00:39:25.935222 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:39:25.935229 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Mar 13 00:39:25.935236 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Mar 13 00:39:25.935244 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:39:25.935251 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:39:25.935258 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:39:25.935266 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:39:25.935273 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:39:25.935280 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:39:25.935289 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 13 00:39:25.935297 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:39:25.935304 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:39:25.935312 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:39:25.935319 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:39:25.935327 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:39:25.935334 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:39:25.935341 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:39:25.935351 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:39:25.935358 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:39:25.935365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:39:25.935373 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:39:25.935380 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:39:25.935387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:39:25.935394 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 13 00:39:25.935402 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 13 00:39:25.935409 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:39:25.935419 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:39:25.935426 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:39:25.935433 kernel: landlock: Up and running.
Mar 13 00:39:25.935441 kernel: SELinux: Initializing.
Mar 13 00:39:25.935448 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:39:25.935455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:39:25.935463 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:39:25.935470 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 13 00:39:25.935477 kernel: ... version: 0
Mar 13 00:39:25.935487 kernel: ... bit width: 48
Mar 13 00:39:25.935494 kernel: ... generic registers: 6
Mar 13 00:39:25.935501 kernel: ... value mask: 0000ffffffffffff
Mar 13 00:39:25.935508 kernel: ... max period: 00007fffffffffff
Mar 13 00:39:25.935516 kernel: ... fixed-purpose events: 0
Mar 13 00:39:25.935523 kernel: ... event mask: 000000000000003f
Mar 13 00:39:25.935530 kernel: signal: max sigframe size: 3376
Mar 13 00:39:25.935537 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:39:25.935545 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:39:25.935554 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:39:25.935562 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:39:25.935569 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:39:25.935576 kernel: .... node #0, CPUs: #1
Mar 13 00:39:25.935584 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:39:25.935591 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 13 00:39:25.935600 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 235480K reserved, 0K cma-reserved)
Mar 13 00:39:25.935607 kernel: devtmpfs: initialized
Mar 13 00:39:25.935657 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:39:25.935668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:39:25.935675 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:39:25.935683 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:39:25.935691 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:39:25.935698 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:39:25.935705 kernel: audit: type=2000 audit(1773362362.791:1): state=initialized audit_enabled=0 res=1
Mar 13 00:39:25.935713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:39:25.935720 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:39:25.935727 kernel: cpuidle: using governor menu
Mar 13 00:39:25.935736 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:39:25.935744 kernel: dca service started, version 1.12.1
Mar 13 00:39:25.935751 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:39:25.935758 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:39:25.935765 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:39:25.935772 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:39:25.935779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:39:25.935786 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:39:25.935793 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:39:25.935802 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:39:25.935809 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:39:25.935816 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:39:25.935823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:39:25.935830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:39:25.935837 kernel: ACPI: Interpreter enabled
Mar 13 00:39:25.935844 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:39:25.935851 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:39:25.935858 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:39:25.935868 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:39:25.935874 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:39:25.935881 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:39:25.936063 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:39:25.936193 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:39:25.936316 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:39:25.936326 kernel: PCI host bridge to bus 0000:00
Mar 13 00:39:25.936456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:39:25.936570 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:39:25.943785 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:39:25.943910 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 13 00:39:25.944023 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:39:25.944135 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 13 00:39:25.944246 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:39:25.944397 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:39:25.944535 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:39:25.944898 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:39:25.945031 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:39:25.945152 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:39:25.945271 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:39:25.945402 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:39:25.945530 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Mar 13 00:39:25.947735 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:39:25.947871 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:39:25.948005 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:39:25.948129 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Mar 13 00:39:25.948251 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:39:25.948376 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:39:25.948497 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:39:25.948648 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:39:25.948775 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:39:25.948903 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:39:25.949024 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Mar 13 00:39:25.949142 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:39:25.949274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:39:25.949394 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:39:25.949404 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:39:25.949412 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:39:25.949419 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:39:25.949427 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:39:25.949434 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:39:25.949441 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:39:25.949451 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:39:25.949458 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:39:25.949465 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:39:25.949472 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:39:25.949479 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:39:25.949486 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:39:25.949493 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:39:25.949500 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:39:25.949507 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:39:25.949516 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:39:25.949523 kernel: iommu: Default domain type: Translated
Mar 13 00:39:25.949531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:39:25.949538 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:39:25.949545 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:39:25.949552 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 13 00:39:25.949559 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 13 00:39:25.951488 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:39:25.952670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:39:25.952809 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:39:25.952820 kernel: vgaarb: loaded
Mar 13 00:39:25.952828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:39:25.952836 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:39:25.952843 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:39:25.952851 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:39:25.952858 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:39:25.952866 kernel: pnp: PnP ACPI init
Mar 13 00:39:25.953006 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:39:25.953017 kernel: pnp: PnP ACPI: found 5 devices
Mar 13 00:39:25.953025 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:39:25.953032 kernel: NET: Registered PF_INET protocol family
Mar 13 00:39:25.953039 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:39:25.953047 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:39:25.953054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:39:25.953061 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:39:25.953072 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:39:25.953079 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:39:25.953086 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:39:25.953094 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:39:25.953101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:39:25.953109 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:39:25.953222 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:39:25.953333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:39:25.953459 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:39:25.955292 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 13 00:39:25.955426 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:39:25.955540 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 13 00:39:25.955550 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:39:25.955557 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:39:25.955565 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 13 00:39:25.955572 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Mar 13 00:39:25.955580 kernel: Initialise system trusted keyrings
Mar 13 00:39:25.955591 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:39:25.955598 kernel: Key type asymmetric registered
Mar 13 00:39:25.955605 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:39:25.955637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:39:25.955645 kernel: io scheduler mq-deadline registered
Mar 13 00:39:25.955652 kernel: io scheduler kyber registered
Mar 13 00:39:25.955659 kernel: io scheduler bfq registered
Mar 13 00:39:25.955852 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:39:25.955860 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:39:25.955871 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:39:25.955878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:39:25.955885 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:39:25.955892 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:39:25.955899 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:39:25.955906 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:39:25.955914 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:39:25.956052 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 13 00:39:25.956170 kernel: rtc_cmos 00:03: registered as rtc0
Mar 13 00:39:25.956289 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T00:39:25 UTC (1773362365)
Mar 13 00:39:25.956402 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:39:25.956412 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:39:25.956419 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:39:25.956426 kernel: Segment Routing with IPv6
Mar 13 00:39:25.956433 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:39:25.956441 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:39:25.956448 kernel: Key type dns_resolver registered
Mar 13 00:39:25.956458 kernel: IPI shorthand broadcast: enabled
Mar 13 00:39:25.956465 kernel: sched_clock: Marking stable (2855006350, 332863893)->(3280820530, -92950287)
Mar 13 00:39:25.956487 kernel: registered taskstats version 1
Mar 13 00:39:25.956513 kernel: Loading compiled-in X.509 certificates
Mar 13 00:39:25.956749 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:39:25.956758 kernel: Demotion targets for Node 0: null
Mar 13 00:39:25.956765 kernel: Key type .fscrypt registered
Mar 13 00:39:25.956773 kernel: Key type fscrypt-provisioning registered
Mar 13 00:39:25.956780 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:39:25.956790 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:39:25.956797 kernel: ima: No architecture policies found
Mar 13 00:39:25.956804 kernel: clk: Disabling unused clocks
Mar 13 00:39:25.956811 kernel: Warning: unable to open an initial console.
Mar 13 00:39:25.956819 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:39:25.956826 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:39:25.956833 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:39:25.956840 kernel: Run /init as init process
Mar 13 00:39:25.956847 kernel: with arguments:
Mar 13 00:39:25.956856 kernel: /init
Mar 13 00:39:25.956863 kernel: with environment:
Mar 13 00:39:25.956884 kernel: HOME=/
Mar 13 00:39:25.956894 kernel: TERM=linux
Mar 13 00:39:25.956903 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:39:25.956913 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:39:25.956921 systemd[1]: Detected virtualization kvm.
Mar 13 00:39:25.956930 systemd[1]: Detected architecture x86-64.
Mar 13 00:39:25.956938 systemd[1]: Running in initrd.
Mar 13 00:39:25.956945 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:39:25.956953 systemd[1]: Hostname set to .
Mar 13 00:39:25.956961 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:39:25.956969 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:39:25.956976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:39:25.956984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:39:25.956995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:39:25.957002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:39:25.957011 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:39:25.957019 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:39:25.957044 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:39:25.957067 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:39:25.957096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:39:25.957132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:39:25.957160 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:39:25.957182 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:39:25.957209 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:39:25.957236 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:39:25.957264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:39:25.957291 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:39:25.957313 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:39:25.957340 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:39:25.957376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:39:25.957406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:39:25.957440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:39:25.957468 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:39:25.957491 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:39:25.957525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:39:25.957552 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:39:25.957579 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:39:25.958887 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:39:25.958902 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:39:25.958911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:39:25.958919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:25.958951 systemd-journald[187]: Collecting audit messages is disabled.
Mar 13 00:39:25.958972 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:39:25.958982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:39:25.958992 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:39:25.959000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:39:25.959009 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:39:25.959018 systemd-journald[187]: Journal started
Mar 13 00:39:25.959035 systemd-journald[187]: Runtime Journal (/run/log/journal/2b03c4e3118b4a869b6a61265ddff2c3) is 8M, max 78.2M, 70.2M free.
Mar 13 00:39:25.927685 systemd-modules-load[188]: Inserted module 'overlay'
Mar 13 00:39:25.993190 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:39:25.993218 kernel: Bridge firewalling registered
Mar 13 00:39:25.989812 systemd-modules-load[188]: Inserted module 'br_netfilter'
Mar 13 00:39:26.079696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:39:26.080828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:26.082022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:39:26.087212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:39:26.089922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:39:26.103423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:39:26.107814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:39:26.114333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:39:26.127698 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:39:26.130980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:39:26.134262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:39:26.136105 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:39:26.138544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:39:26.143720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:39:26.161168 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:26.185134 systemd-resolved[226]: Positive Trust Anchors:
Mar 13 00:39:26.185147 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:39:26.185173 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:39:26.189966 systemd-resolved[226]: Defaulting to hostname 'linux'.
Mar 13 00:39:26.193591 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:39:26.195004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:39:26.264663 kernel: SCSI subsystem initialized
Mar 13 00:39:26.273686 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:39:26.284646 kernel: iscsi: registered transport (tcp)
Mar 13 00:39:26.306825 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:39:26.306856 kernel: QLogic iSCSI HBA Driver
Mar 13 00:39:26.330475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:39:26.352033 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:39:26.356241 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:39:26.414231 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:39:26.417184 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:39:26.470642 kernel: raid6: avx2x4 gen() 35980 MB/s
Mar 13 00:39:26.488642 kernel: raid6: avx2x2 gen() 30506 MB/s
Mar 13 00:39:26.506700 kernel: raid6: avx2x1 gen() 21933 MB/s
Mar 13 00:39:26.506719 kernel: raid6: using algorithm avx2x4 gen() 35980 MB/s
Mar 13 00:39:26.526893 kernel: raid6: .... xor() 4511 MB/s, rmw enabled
Mar 13 00:39:26.526911 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:39:26.549655 kernel: xor: automatically using best checksumming function avx
Mar 13 00:39:26.690667 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:39:26.699977 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:39:26.702730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:39:26.736512 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Mar 13 00:39:26.743032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:39:26.746269 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:39:26.781800 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Mar 13 00:39:26.813487 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:39:26.816612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:39:26.888827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:39:26.891775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:39:26.968642 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:39:26.981919 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Mar 13 00:39:26.986633 kernel: libata version 3.00 loaded.
Mar 13 00:39:27.194807 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:39:27.194865 kernel: scsi host0: Virtio SCSI HBA
Mar 13 00:39:27.200858 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:39:27.201057 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:39:27.206638 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:39:27.239077 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:39:27.239325 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:39:27.239473 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:39:27.245307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:39:27.282414 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 13 00:39:27.282467 kernel: scsi host1: ahci
Mar 13 00:39:27.282705 kernel: scsi host2: ahci
Mar 13 00:39:27.282861 kernel: scsi host3: ahci
Mar 13 00:39:27.271729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:27.280552 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:27.283838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:27.285406 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:39:27.295094 kernel: scsi host4: ahci
Mar 13 00:39:27.295278 kernel: scsi host5: ahci
Mar 13 00:39:27.296726 kernel: scsi host6: ahci
Mar 13 00:39:27.298997 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Mar 13 00:39:27.302645 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Mar 13 00:39:27.306189 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Mar 13 00:39:27.312385 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Mar 13 00:39:27.312418 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Mar 13 00:39:27.316265 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Mar 13 00:39:27.321775 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 13 00:39:27.326681 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 13 00:39:27.327078 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 13 00:39:27.328650 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 13 00:39:27.328830 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 13 00:39:27.340857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:39:27.340890 kernel: GPT:9289727 != 167739391
Mar 13 00:39:27.344274 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:39:27.344296 kernel: GPT:9289727 != 167739391
Mar 13 00:39:27.369246 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:39:27.369289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:39:27.369302 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 13 00:39:27.458266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:27.624652 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.632643 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.642740 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.642766 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.647640 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.647662 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:27.701064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 13 00:39:27.702303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 13 00:39:27.711138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 13 00:39:27.731725 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 13 00:39:27.732758 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:39:27.743843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:39:27.746297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:39:27.747087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:39:27.748765 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:39:27.751976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:39:27.755871 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:39:27.765186 disk-uuid[612]: Primary Header is updated.
Mar 13 00:39:27.765186 disk-uuid[612]: Secondary Entries is updated.
Mar 13 00:39:27.765186 disk-uuid[612]: Secondary Header is updated.
Mar 13 00:39:27.775971 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:39:27.779815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:39:27.787657 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:39:28.793676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:39:28.793728 disk-uuid[613]: The operation has completed successfully.
Mar 13 00:39:28.843161 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:39:28.843282 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:39:28.874126 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:39:28.891758 sh[634]: Success
Mar 13 00:39:28.911716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:39:28.911753 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:39:28.912878 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:39:28.925644 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:39:28.968123 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:39:28.971819 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:39:28.995144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:39:29.008106 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (646)
Mar 13 00:39:29.008143 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:39:29.014212 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:29.024191 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:39:29.024215 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:39:29.029184 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:39:29.031241 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:39:29.033328 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:39:29.035071 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:39:29.036831 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:39:29.039895 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:39:29.077641 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (681)
Mar 13 00:39:29.082496 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:29.082534 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:29.092008 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:39:29.092041 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:39:29.092054 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:39:29.102684 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:29.104526 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:39:29.108797 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:39:29.173178 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:39:29.177197 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:39:29.220596 ignition[747]: Ignition 2.22.0
Mar 13 00:39:29.220830 ignition[747]: Stage: fetch-offline
Mar 13 00:39:29.220860 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:29.220870 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:39:29.225930 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:39:29.220941 ignition[747]: parsed url from cmdline: ""
Mar 13 00:39:29.220945 ignition[747]: no config URL provided
Mar 13 00:39:29.220950 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:39:29.220958 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:39:29.220963 ignition[747]: failed to fetch config: resource requires networking
Mar 13 00:39:29.221088 ignition[747]: Ignition finished successfully
Mar 13 00:39:29.235072 systemd-networkd[815]: lo: Link UP
Mar 13 00:39:29.235085 systemd-networkd[815]: lo: Gained carrier
Mar 13 00:39:29.236670 systemd-networkd[815]: Enumeration completed
Mar 13 00:39:29.236747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:39:29.237413 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:39:29.237418 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:39:29.238557 systemd[1]: Reached target network.target - Network.
Mar 13 00:39:29.239055 systemd-networkd[815]: eth0: Link UP
Mar 13 00:39:29.239226 systemd-networkd[815]: eth0: Gained carrier
Mar 13 00:39:29.239235 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:39:29.243731 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:39:29.271842 ignition[824]: Ignition 2.22.0
Mar 13 00:39:29.271855 ignition[824]: Stage: fetch
Mar 13 00:39:29.271973 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:29.271984 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:39:29.272223 ignition[824]: parsed url from cmdline: ""
Mar 13 00:39:29.272228 ignition[824]: no config URL provided
Mar 13 00:39:29.272234 ignition[824]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:39:29.272243 ignition[824]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:39:29.272276 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 13 00:39:29.272469 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:39:29.472658 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 13 00:39:29.472835 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:39:29.873399 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 13 00:39:29.873576 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:39:29.948673 systemd-networkd[815]: eth0: DHCPv4 address 172.236.108.24/24, gateway 172.236.108.1 acquired from 23.40.197.105
Mar 13 00:39:30.674282 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 13 00:39:30.679764 systemd-networkd[815]: eth0: Gained IPv6LL
Mar 13 00:39:30.767107 ignition[824]: PUT result: OK
Mar 13 00:39:30.767177 ignition[824]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 13 00:39:30.882668 ignition[824]: GET result: OK
Mar 13 00:39:30.882803 ignition[824]: parsing config with SHA512: 7389fed40515fa34cca1df8ce56b8ac03f64c3271ddebdf6c6ffab95cbfd3e4a97c60707a5e8619c7a6eb935282a38896a9ebe5f01ad0b6862275060aade6930
Mar 13 00:39:30.888479 unknown[824]: fetched base config from "system"
Mar 13 00:39:30.888837 ignition[824]: fetch: fetch complete
Mar 13 00:39:30.888490 unknown[824]: fetched base config from "system"
Mar 13 00:39:30.888843 ignition[824]: fetch: fetch passed
Mar 13 00:39:30.888496 unknown[824]: fetched user config from "akamai"
Mar 13 00:39:30.888888 ignition[824]: Ignition finished successfully
Mar 13 00:39:30.902257 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:39:30.907731 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:39:30.936602 ignition[831]: Ignition 2.22.0
Mar 13 00:39:30.936638 ignition[831]: Stage: kargs
Mar 13 00:39:30.936762 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:30.936773 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:39:30.937374 ignition[831]: kargs: kargs passed
Mar 13 00:39:30.937415 ignition[831]: Ignition finished successfully
Mar 13 00:39:30.942066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:39:30.945510 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:39:30.968120 ignition[838]: Ignition 2.22.0
Mar 13 00:39:30.968135 ignition[838]: Stage: disks
Mar 13 00:39:30.968238 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:30.968248 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:39:30.969104 ignition[838]: disks: disks passed
Mar 13 00:39:30.971514 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:39:30.969143 ignition[838]: Ignition finished successfully
Mar 13 00:39:30.973262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:39:30.974539 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:39:30.975922 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:39:30.977476 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:39:30.979070 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:39:30.981359 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:39:31.018842 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:39:31.022192 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:39:31.024894 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:39:31.130644 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:39:31.131375 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:39:31.132474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:39:31.134656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:39:31.137699 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:39:31.139419 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:39:31.140754 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:39:31.141669 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:39:31.146328 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:39:31.149377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:39:31.155641 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (854)
Mar 13 00:39:31.161636 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:31.161660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:31.167997 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:39:31.168021 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:39:31.172210 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:39:31.173882 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:39:31.207318 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:39:31.212635 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:39:31.218498 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:39:31.222704 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:39:31.307258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:39:31.309369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:39:31.311578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:39:31.331072 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:39:31.335117 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:39:31.349587 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 13 00:39:31.359855 ignition[969]: INFO : Ignition 2.22.0 Mar 13 00:39:31.359855 ignition[969]: INFO : Stage: mount Mar 13 00:39:31.362088 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:39:31.362088 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 13 00:39:31.362088 ignition[969]: INFO : mount: mount passed Mar 13 00:39:31.362088 ignition[969]: INFO : Ignition finished successfully Mar 13 00:39:31.362474 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 00:39:31.365716 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 00:39:32.134227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:39:32.164647 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980) Mar 13 00:39:32.164698 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:39:32.169737 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:39:32.174797 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 13 00:39:32.174852 kernel: BTRFS info (device sda6): turning on async discard Mar 13 00:39:32.176923 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 00:39:32.181581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:39:32.218333 ignition[996]: INFO : Ignition 2.22.0 Mar 13 00:39:32.218333 ignition[996]: INFO : Stage: files Mar 13 00:39:32.220342 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:39:32.220342 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 13 00:39:32.220342 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:39:32.220342 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:39:32.220342 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:39:32.225513 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:39:32.225513 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:39:32.225513 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:39:32.225513 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:39:32.225513 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 13 00:39:32.222947 unknown[996]: wrote ssh authorized keys file for user: core Mar 13 00:39:32.442372 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:39:32.498257 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:39:32.499731 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:39:32.508284 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 13 00:39:33.018054 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 00:39:34.011914 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 13 00:39:34.011914 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 00:39:34.014695 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:39:34.016203 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:39:34.016203 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 00:39:34.016203 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 13 00:39:34.016203 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:39:34.022066 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 13 00:39:34.022066 ignition[996]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Mar 13 00:39:34.022066 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:39:34.022066 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:39:34.022066 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:39:34.022066 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:39:34.022066 ignition[996]: INFO : files: files passed Mar 13 00:39:34.022066 ignition[996]: INFO : Ignition finished successfully Mar 13 00:39:34.019652 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 00:39:34.021755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:39:34.024680 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:39:34.038066 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:39:34.038167 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:39:34.045575 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:34.046871 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:34.048000 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:34.048794 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:39:34.050043 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:39:34.052155 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:39:34.095826 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:39:34.095939 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:39:34.096939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:39:34.098164 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 00:39:34.099855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:39:34.100530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:39:34.125782 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:39:34.127815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:39:34.145812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:39:34.147532 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:39:34.149284 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:39:34.150080 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:39:34.150176 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:39:34.151955 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:39:34.152992 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:39:34.154568 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Mar 13 00:39:34.155942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:39:34.157436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:39:34.159058 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 00:39:34.160689 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:39:34.162320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:39:34.163984 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:39:34.165545 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:39:34.167178 systemd[1]: Stopped target swap.target - Swaps. Mar 13 00:39:34.168692 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:39:34.168847 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:39:34.170546 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:39:34.171587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:39:34.173026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:39:34.173692 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:39:34.174560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:39:34.174679 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:39:34.176783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:39:34.176892 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:39:34.177924 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:39:34.178055 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:39:34.180710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:39:34.181556 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:39:34.183735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:39:34.186230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:39:34.187332 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:39:34.187482 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:39:34.188382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:39:34.188516 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:39:34.198965 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:39:34.221705 ignition[1051]: INFO : Ignition 2.22.0 Mar 13 00:39:34.221705 ignition[1051]: INFO : Stage: umount Mar 13 00:39:34.221705 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:39:34.221705 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 13 00:39:34.221705 ignition[1051]: INFO : umount: umount passed Mar 13 00:39:34.221705 ignition[1051]: INFO : Ignition finished successfully Mar 13 00:39:34.218962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:39:34.220667 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:39:34.220965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 13 00:39:34.225056 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:39:34.225145 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:39:34.226550 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:39:34.226603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:39:34.228062 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:39:34.228310 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:39:34.230229 systemd[1]: Stopped target network.target - Network. Mar 13 00:39:34.230916 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:39:34.230972 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:39:34.231779 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:39:34.233811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:39:34.235030 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:39:34.236425 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:39:34.237989 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:39:34.239875 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:39:34.239927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:39:34.242848 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:39:34.242904 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:39:34.244206 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:39:34.244260 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:39:34.245046 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:39:34.245094 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:39:34.247667 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:39:34.249323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:39:34.253595 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:39:34.256021 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:39:34.257314 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:39:34.260347 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:39:34.260583 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:39:34.260833 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:39:34.262860 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:39:34.263151 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:39:34.263270 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:39:34.266126 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:39:34.267424 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:39:34.267467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:39:34.269052 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:39:34.269107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:39:34.271125 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Mar 13 00:39:34.273084 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:39:34.273140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:39:34.274715 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:39:34.274765 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:39:34.277790 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:39:34.277841 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:39:34.278813 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:39:34.278861 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:39:34.280750 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:39:34.286072 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:39:34.286138 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:39:34.300123 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:39:34.300306 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:39:34.301857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:39:34.301941 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:39:34.303378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:39:34.303417 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:39:34.305035 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:39:34.305085 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:39:34.307305 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:39:34.307355 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:39:34.308706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:39:34.308760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:39:34.311734 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:39:34.314092 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:39:34.314149 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:39:34.315401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:39:34.315449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:39:34.317787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:39:34.317834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:39:34.322608 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 13 00:39:34.322694 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 13 00:39:34.322744 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:39:34.323120 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:39:34.323221 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Mar 13 00:39:34.325462 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:39:34.325824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:39:34.327400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:39:34.329322 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:39:34.344819 systemd[1]: Switching root. Mar 13 00:39:34.404559 systemd-journald[187]: Journal stopped Mar 13 00:39:35.609002 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Mar 13 00:39:35.609033 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:39:35.609045 kernel: SELinux: policy capability open_perms=1 Mar 13 00:39:35.609055 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:39:35.609064 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:39:35.609075 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:39:35.609085 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:39:35.609095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:39:35.609104 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:39:35.609113 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:39:35.609122 kernel: audit: type=1403 audit(1773362374.537:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:39:35.609132 systemd[1]: Successfully loaded SELinux policy in 75.290ms. Mar 13 00:39:35.609145 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.726ms. Mar 13 00:39:35.609156 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:39:35.609167 systemd[1]: Detected virtualization kvm. Mar 13 00:39:35.609177 systemd[1]: Detected architecture x86-64. Mar 13 00:39:35.609189 systemd[1]: Detected first boot. Mar 13 00:39:35.609199 systemd[1]: Initializing machine ID from random generator. Mar 13 00:39:35.609209 zram_generator::config[1096]: No configuration found. Mar 13 00:39:35.609221 kernel: Guest personality initialized and is inactive Mar 13 00:39:35.609230 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 13 00:39:35.609240 kernel: Initialized host personality Mar 13 00:39:35.609249 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:39:35.609259 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:39:35.609272 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 13 00:39:35.609281 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:39:35.609291 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:39:35.609301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:39:35.609311 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:39:35.609321 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:39:35.609332 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 13 00:39:35.609344 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Mar 13 00:39:35.609354 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:39:35.609364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:39:35.609374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:39:35.609384 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:39:35.609394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:39:35.609405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:39:35.609415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 13 00:39:35.609427 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:39:35.609441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:39:35.609452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:39:35.609462 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:39:35.609472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:39:35.609483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:39:35.609493 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:39:35.609505 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:39:35.609516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:39:35.609526 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 13 00:39:35.609536 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:39:35.609546 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:39:35.609556 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:39:35.609567 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:39:35.609577 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:39:35.609587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:39:35.609604 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:39:35.609639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:39:35.609655 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:39:35.609669 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:39:35.609689 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:39:35.609704 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:39:35.609722 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:39:35.609736 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:39:35.609752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:35.609767 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:39:35.609782 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
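The device units named above, such as dev-disk-by\x2dlabel-OEM.device, come from systemd's path escaping: "/" separators become "-", and characters that would otherwise be ambiguous (including a literal "-") are encoded as \xNN. A simplified Python approximation of that mapping; the real systemd-escape handles more corner cases, such as empty paths and leading dots:

    import string

    SAFE = set(string.ascii_letters + string.digits + "_")

    def escape_component(s):
        out = []
        for i, ch in enumerate(s):
            if ch in SAFE or (ch == "." and i > 0):
                out.append(ch)
            else:
                # anything else, including "-", becomes \xNN of its UTF-8 bytes
                out.extend("\\x%02x" % b for b in ch.encode())
        return "".join(out)

    def path_to_unit(path, suffix):
        parts = [p for p in path.strip("/").split("/") if p]
        return "-".join(escape_component(p) for p in parts) + suffix

    print(path_to_unit("/dev/disk/by-label/OEM", ".device"))
    # -> dev-disk-by\x2dlabel-OEM.device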
Mar 13 00:39:35.609797 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:39:35.609816 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 13 00:39:35.609830 systemd[1]: Reached target machines.target - Containers. Mar 13 00:39:35.609846 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 13 00:39:35.609863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:35.609881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:39:35.609897 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:39:35.609914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:39:35.609924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:39:35.609935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:39:35.609948 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 13 00:39:35.609959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:39:35.609969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:39:35.609979 kernel: ACPI: bus type drm_connector registered Mar 13 00:39:35.609989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:39:35.610005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:39:35.610024 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:39:35.610040 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:39:35.610055 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:35.610065 kernel: loop: module loaded Mar 13 00:39:35.610075 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:39:35.610085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:39:35.610095 kernel: fuse: init (API version 7.41) Mar 13 00:39:35.610105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:39:35.610115 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 13 00:39:35.610126 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:39:35.610138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:39:35.610173 systemd-journald[1184]: Collecting audit messages is disabled. Mar 13 00:39:35.610194 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:39:35.610205 systemd[1]: Stopped verity-setup.service. Mar 13 00:39:35.610218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 13 00:39:35.610229 systemd-journald[1184]: Journal started Mar 13 00:39:35.610248 systemd-journald[1184]: Runtime Journal (/run/log/journal/980707c6b10a4aaaa73e7781ec8200a5) is 8M, max 78.2M, 70.2M free. Mar 13 00:39:35.210408 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:39:35.235992 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 13 00:39:35.236577 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 13 00:39:35.621672 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:39:35.623126 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 13 00:39:35.624007 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:39:35.625269 systemd[1]: Mounted media.mount - External Media Directory. Mar 13 00:39:35.626201 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:39:35.627087 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:39:35.627992 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:39:35.629059 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:39:35.630270 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:39:35.631402 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:39:35.631716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:39:35.635055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:39:35.635339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:39:35.636444 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:39:35.636775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:39:35.637990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:39:35.638268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:39:35.639654 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:39:35.639933 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:39:35.641075 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:39:35.641371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:39:35.642760 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:39:35.643852 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:39:35.645109 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 13 00:39:35.646221 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:39:35.658115 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:39:35.662705 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 13 00:39:35.667432 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:39:35.669071 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:39:35.669154 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:39:35.673502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Mar 13 00:39:35.677736 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:39:35.679974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:35.686008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 13 00:39:35.692120 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 13 00:39:35.694094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:39:35.695721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:39:35.698897 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:39:35.712521 systemd-journald[1184]: Time spent on flushing to /var/log/journal/980707c6b10a4aaaa73e7781ec8200a5 is 29.880ms for 1002 entries. Mar 13 00:39:35.712521 systemd-journald[1184]: System Journal (/var/log/journal/980707c6b10a4aaaa73e7781ec8200a5) is 8M, max 195.6M, 187.6M free. Mar 13 00:39:35.758596 systemd-journald[1184]: Received client request to flush runtime journal. Mar 13 00:39:35.701223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:39:35.713887 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:39:35.718326 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:39:35.723682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 13 00:39:35.725065 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:39:35.740974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:39:35.742080 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:39:35.744672 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:39:35.763666 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:39:35.775656 kernel: loop0: detected capacity change from 0 to 110984 Mar 13 00:39:35.801541 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:39:35.805899 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:39:35.820308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:39:35.839639 kernel: loop1: detected capacity change from 0 to 8 Mar 13 00:39:35.834433 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:39:35.840276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:39:35.869965 kernel: loop2: detected capacity change from 0 to 219192 Mar 13 00:39:35.868215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:39:35.899983 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Mar 13 00:39:35.900317 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Mar 13 00:39:35.905844 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 13 00:39:35.921807 kernel: loop3: detected capacity change from 0 to 128560 Mar 13 00:39:35.948649 kernel: loop4: detected capacity change from 0 to 110984 Mar 13 00:39:35.970664 kernel: loop5: detected capacity change from 0 to 8 Mar 13 00:39:35.975645 kernel: loop6: detected capacity change from 0 to 219192 Mar 13 00:39:35.994854 kernel: loop7: detected capacity change from 0 to 128560 Mar 13 00:39:36.014721 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Mar 13 00:39:36.015407 (sd-merge)[1247]: Merged extensions into '/usr'. Mar 13 00:39:36.024686 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)... Mar 13 00:39:36.024779 systemd[1]: Reloading... Mar 13 00:39:36.146648 zram_generator::config[1273]: No configuration found. Mar 13 00:39:36.254097 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 00:39:36.353717 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 13 00:39:36.354311 systemd[1]: Reloading finished in 329 ms. Mar 13 00:39:36.388025 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 00:39:36.389184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 13 00:39:36.390456 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 00:39:36.403090 systemd[1]: Starting ensure-sysext.service... Mar 13 00:39:36.407724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:39:36.413727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:39:36.425150 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)... Mar 13 00:39:36.425245 systemd[1]: Reloading... Mar 13 00:39:36.432238 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 13 00:39:36.432279 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:39:36.432562 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:39:36.432841 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 00:39:36.433767 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 00:39:36.434016 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Mar 13 00:39:36.434092 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Mar 13 00:39:36.439996 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:39:36.440010 systemd-tmpfiles[1318]: Skipping /boot Mar 13 00:39:36.462769 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:39:36.462971 systemd-tmpfiles[1318]: Skipping /boot Mar 13 00:39:36.466143 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Mar 13 00:39:36.528972 zram_generator::config[1345]: No configuration found. Mar 13 00:39:36.764653 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 13 00:39:36.766320 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
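The loop0 through loop7 capacity changes and the sd-merge lines above are systemd-sysext attaching the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai) and overlaying them onto /usr, which is why a full systemd reload follows. The kubernetes image is found through the /etc/extensions/kubernetes.raw symlink written during the files stage. A small sketch of that discovery step, assuming the commonly documented sysext search directories; the exact list, ordering, and precedence rules are simplified here:

    from pathlib import Path

    # Directories systemd-sysext is commonly documented to scan; treated as an
    # assumption here rather than an exhaustive or ordered list.
    SEARCH_DIRS = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
        "/usr/lib/extensions",
    ]

    def discover_extensions():
        images = {}
        for d in SEARCH_DIRS:
            base = Path(d)
            if not base.is_dir():
                continue
            for entry in sorted(base.glob("*.raw")):
                # resolve() follows symlinks such as
                # /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
                images.setdefault(entry.stem, entry.resolve())
        return images

    for name, target in discover_extensions().items():
        print(f"{name}: {target}")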
Mar 13 00:39:36.766936 systemd[1]: Reloading finished in 340 ms. Mar 13 00:39:36.768644 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:39:36.779013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:39:36.781190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:39:36.809809 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:39:36.819642 kernel: ACPI: button: Power Button [PWRF] Mar 13 00:39:36.816421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:39:36.820508 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:39:36.829797 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:39:36.837249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:39:36.845987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 00:39:36.847747 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 13 00:39:36.848009 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 13 00:39:36.859562 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:36.860826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:36.864886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:39:36.873767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:39:36.875945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:39:36.877659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:36.877928 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:36.878023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:36.885424 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 13 00:39:36.890929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:36.891099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:36.891259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:36.891369 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:36.891450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:36.896577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 13 00:39:36.897271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:36.901244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:39:36.902444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:36.902542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:36.902684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:36.904048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 00:39:36.905353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:39:36.905889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:39:36.916545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:39:36.920882 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 00:39:36.923468 systemd[1]: Finished ensure-sysext.service. Mar 13 00:39:36.939706 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 13 00:39:36.948071 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 00:39:36.975929 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:39:36.976162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:39:36.981692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:39:36.982330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:39:36.983205 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:39:36.985506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:39:37.033274 augenrules[1483]: No rules Mar 13 00:39:37.047708 kernel: EDAC MC: Ver: 3.0.0 Mar 13 00:39:37.046909 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:39:37.047176 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:39:37.050011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 00:39:37.051112 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:39:37.051936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:39:37.053456 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 00:39:37.075794 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 00:39:37.101971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:39:37.129999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 13 00:39:37.133754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 13 00:39:37.181684 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:39:37.335387 systemd-networkd[1428]: lo: Link UP Mar 13 00:39:37.335400 systemd-networkd[1428]: lo: Gained carrier Mar 13 00:39:37.340490 systemd-networkd[1428]: Enumeration completed Mar 13 00:39:37.341786 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:39:37.342146 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:39:37.342152 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:39:37.346252 systemd-networkd[1428]: eth0: Link UP Mar 13 00:39:37.346494 systemd-networkd[1428]: eth0: Gained carrier Mar 13 00:39:37.346545 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:39:37.375879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:39:37.378755 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:39:37.382206 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:39:37.401243 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 13 00:39:37.402550 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:39:37.406547 systemd-resolved[1429]: Positive Trust Anchors: Mar 13 00:39:37.406840 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:39:37.406923 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:39:37.411394 systemd-resolved[1429]: Defaulting to hostname 'linux'. Mar 13 00:39:37.414106 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:39:37.415733 systemd[1]: Reached target network.target - Network. Mar 13 00:39:37.416473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:39:37.417246 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:39:37.418091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:39:37.420710 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 00:39:37.421485 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:39:37.422630 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:39:37.423527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:39:37.424425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
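The positive trust anchor that systemd-resolved prints above is the DNSSEC DS record for the root zone: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A tiny sketch that splits such a record string into its fields, using the value from the log:

    DS = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    def parse_ds(record):
        owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
        return {
            "owner": owner,                  # "." is the DNS root
            "key_tag": int(key_tag),
            "algorithm": int(algorithm),     # 8 = RSA/SHA-256
            "digest_type": int(digest_type), # 2 = SHA-256
            "digest": digest,
        }

    print(parse_ds(DS))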
Mar 13 00:39:37.425190 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:39:37.425217 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:39:37.425916 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:39:37.427386 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:39:37.430357 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:39:37.433649 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:39:37.434677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:39:37.435491 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:39:37.440138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:39:37.441460 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:39:37.443848 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 13 00:39:37.445013 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:39:37.448001 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:39:37.448733 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:39:37.449574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:39:37.449651 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:39:37.451411 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:39:37.455744 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 00:39:37.466885 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:39:37.469805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:39:37.472955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 00:39:37.477799 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:39:37.479688 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:39:37.483819 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:39:37.490016 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:39:37.521948 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:39:37.533461 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:39:37.539180 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:39:37.545970 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing passwd entry cache Mar 13 00:39:37.546230 oslogin_cache_refresh[1516]: Refreshing passwd entry cache Mar 13 00:39:37.555047 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 13 00:39:37.555283 oslogin_cache_refresh[1516]: Failure getting users, quitting Mar 13 00:39:37.555968 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting users, quitting Mar 13 00:39:37.555968 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:39:37.555968 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing group entry cache Mar 13 00:39:37.555968 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting groups, quitting Mar 13 00:39:37.555968 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:39:37.555299 oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:39:37.555342 oslogin_cache_refresh[1516]: Refreshing group entry cache Mar 13 00:39:37.555814 oslogin_cache_refresh[1516]: Failure getting groups, quitting Mar 13 00:39:37.555823 oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:39:37.558935 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:39:37.559378 jq[1514]: false Mar 13 00:39:37.559486 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:39:37.564834 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:39:37.570839 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:39:37.574921 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:39:37.577253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:39:37.577598 coreos-metadata[1511]: Mar 13 00:39:37.577 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 13 00:39:37.578268 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:39:37.578887 extend-filesystems[1515]: Found /dev/sda6 Mar 13 00:39:37.579003 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:39:37.579338 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 00:39:37.587656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:39:37.589841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:39:37.611393 jq[1529]: true Mar 13 00:39:37.623335 extend-filesystems[1515]: Found /dev/sda9 Mar 13 00:39:37.628235 update_engine[1525]: I20260313 00:39:37.628171 1525 main.cc:92] Flatcar Update Engine starting Mar 13 00:39:37.637769 extend-filesystems[1515]: Checking size of /dev/sda9 Mar 13 00:39:37.650274 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:39:37.653039 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 13 00:39:37.660324 jq[1548]: true Mar 13 00:39:37.664521 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:39:37.675409 extend-filesystems[1515]: Resized partition /dev/sda9 Mar 13 00:39:37.677524 tar[1534]: linux-amd64/LICENSE Mar 13 00:39:37.677524 tar[1534]: linux-amd64/helm Mar 13 00:39:37.678692 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:39:37.688644 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Mar 13 00:39:37.703247 dbus-daemon[1512]: [system] SELinux support is enabled Mar 13 00:39:37.707177 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:39:37.712486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:39:37.712515 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:39:37.714144 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:39:37.714165 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:39:37.726542 systemd[1]: Started update-engine.service - Update Engine. Mar 13 00:39:37.728894 update_engine[1525]: I20260313 00:39:37.728751 1525 update_check_scheduler.cc:74] Next update check in 9m30s Mar 13 00:39:37.739866 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:39:37.789417 systemd-logind[1523]: Watching system buttons on /dev/input/event2 (Power Button) Mar 13 00:39:37.789456 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:39:37.799856 systemd-logind[1523]: New seat seat0. Mar 13 00:39:37.801548 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:39:37.873637 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:39:37.875572 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:39:37.881978 systemd[1]: Starting sshkeys.service... Mar 13 00:39:37.936053 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 00:39:37.941757 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 13 00:39:38.041712 systemd-networkd[1428]: eth0: DHCPv4 address 172.236.108.24/24, gateway 172.236.108.1 acquired from 23.40.197.105 Mar 13 00:39:38.042370 dbus-daemon[1512]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1428 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 13 00:39:38.042827 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection. Mar 13 00:39:38.053195 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
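The EXT4 resize announced above grows /dev/sda9 from 553472 to 20360187 blocks, and the extend-filesystems records that follow confirm the block size is 4 KiB. A quick Python conversion of those logged figures:

    # Convert the ext4 block counts logged by the kernel and resize2fs
    # (4 KiB blocks, per the extend-filesystems output) into sizes.
    BLOCK_SIZE = 4096
    for label, blocks in (("before resize", 553_472), ("after resize", 20_360_187)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    # before resize: 553472 blocks = 2.11 GiB
    # after resize: 20360187 blocks = 77.67 GiB
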
Mar 13 00:39:38.086740 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 13 00:39:38.093825 coreos-metadata[1588]: Mar 13 00:39:38.093 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 13 00:39:38.100466 containerd[1551]: time="2026-03-13T00:39:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:39:38.103467 containerd[1551]: time="2026-03-13T00:39:38.103161110Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:39:38.104274 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:39:38.104274 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 13 00:39:38.104274 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 13 00:39:38.116050 extend-filesystems[1515]: Resized filesystem in /dev/sda9 Mar 13 00:39:38.105899 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:39:38.106180 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:39:38.134214 containerd[1551]: time="2026-03-13T00:39:38.133529011Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.24µs" Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.134786242Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.134822022Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135018962Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135034672Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135072602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135148702Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135160822Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135394253Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135407683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135420663Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 
containerd[1551]: time="2026-03-13T00:39:38.135430723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:39:38.135853 containerd[1551]: time="2026-03-13T00:39:38.135519353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.138923456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.139027166Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.139050586Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.139119056Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.139491997Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:39:38.139811 containerd[1551]: time="2026-03-13T00:39:38.139565457Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.146942454Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147000244Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147016864Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147040824Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147054774Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147066414Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147080494Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147095974Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147114344Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147127034Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:39:38.147165 containerd[1551]: time="2026-03-13T00:39:38.147136984Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:39:38.147165 containerd[1551]: 
time="2026-03-13T00:39:38.147150594Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147272285Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147294535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147310165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147322435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147332515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147345285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147358795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147370725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147382225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:39:38.147393 containerd[1551]: time="2026-03-13T00:39:38.147393185Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:39:38.147719 containerd[1551]: time="2026-03-13T00:39:38.147412505Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:39:38.147719 containerd[1551]: time="2026-03-13T00:39:38.147468495Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:39:38.147719 containerd[1551]: time="2026-03-13T00:39:38.147484275Z" level=info msg="Start snapshots syncer" Mar 13 00:39:38.147719 containerd[1551]: time="2026-03-13T00:39:38.147520655Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:39:38.148183 containerd[1551]: time="2026-03-13T00:39:38.147836835Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:39:38.148183 containerd[1551]: time="2026-03-13T00:39:38.147894915Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154673512Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154803472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154829002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154848822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154859482Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154873362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154886842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154898632Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154931542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: 
time="2026-03-13T00:39:38.154945052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154955582Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154978282Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.154994052Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:38.156101 containerd[1551]: time="2026-03-13T00:39:38.155003862Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155014152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155042482Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155054042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155073532Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155093082Z" level=info msg="runtime interface created" Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155117632Z" level=info msg="created NRI interface" Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155127262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155138862Z" level=info msg="Connect containerd service" Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155160372Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:39:38.156406 containerd[1551]: time="2026-03-13T00:39:38.155985253Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:39:38.194668 coreos-metadata[1588]: Mar 13 00:39:38.193 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 13 00:39:38.236170 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:39:38.259657 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 13 00:39:38.263311 dbus-daemon[1512]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 00:39:38.263888 dbus-daemon[1512]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1593 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 00:39:38.264361 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:39:38.270156 systemd[1]: Starting polkit.service - Authorization Manager... Mar 13 00:39:38.309712 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:39:38.315395 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:39:38.328007 coreos-metadata[1588]: Mar 13 00:39:38.327 INFO Fetch successful Mar 13 00:39:38.344652 containerd[1551]: time="2026-03-13T00:39:38.344382012Z" level=info msg="Start subscribing containerd event" Mar 13 00:39:38.344652 containerd[1551]: time="2026-03-13T00:39:38.344473722Z" level=info msg="Start recovering state" Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.344609192Z" level=info msg="Start event monitor" Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.344983192Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.344997852Z" level=info msg="Start streaming server" Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.345013032Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.345056222Z" level=info msg="runtime interface starting up..." Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.345068352Z" level=info msg="starting plugins..." Mar 13 00:39:38.345297 containerd[1551]: time="2026-03-13T00:39:38.345095652Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:39:38.349667 containerd[1551]: time="2026-03-13T00:39:38.348223275Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:39:38.349667 containerd[1551]: time="2026-03-13T00:39:38.348295826Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:39:38.349667 containerd[1551]: time="2026-03-13T00:39:38.348349086Z" level=info msg="containerd successfully booted in 0.250222s" Mar 13 00:39:38.348682 systemd-timesyncd[1457]: Contacted time server 38.45.67.7:123 (0.flatcar.pool.ntp.org). Mar 13 00:39:38.348765 systemd-timesyncd[1457]: Initial clock synchronization to Fri 2026-03-13 00:39:38.738315 UTC. Mar 13 00:39:38.348850 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:39:38.349232 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:39:38.351603 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:39:38.357174 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:39:38.381895 update-ssh-keys[1627]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:39:38.383800 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:39:38.391729 systemd[1]: Finished sshkeys.service. Mar 13 00:39:38.397026 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:39:38.403363 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:39:38.406232 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 13 00:39:38.407308 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:39:38.448167 polkitd[1616]: Started polkitd version 126 Mar 13 00:39:38.452779 polkitd[1616]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:39:38.453285 polkitd[1616]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:39:38.453342 polkitd[1616]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:39:38.453548 polkitd[1616]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:39:38.453579 polkitd[1616]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:39:38.453641 polkitd[1616]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:39:38.454517 polkitd[1616]: Finished loading, compiling and executing 2 rules Mar 13 00:39:38.455081 systemd[1]: Started polkit.service - Authorization Manager. Mar 13 00:39:38.455175 dbus-daemon[1512]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:39:38.455693 polkitd[1616]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:39:38.463142 tar[1534]: linux-amd64/README.md Mar 13 00:39:38.467556 systemd-hostnamed[1593]: Hostname set to <172-236-108-24> (transient) Mar 13 00:39:38.468118 systemd-resolved[1429]: System hostname changed to '172-236-108-24'. Mar 13 00:39:38.488777 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:39:38.591379 coreos-metadata[1511]: Mar 13 00:39:38.591 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 13 00:39:38.683954 coreos-metadata[1511]: Mar 13 00:39:38.683 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 13 00:39:38.928187 coreos-metadata[1511]: Mar 13 00:39:38.928 INFO Fetch successful Mar 13 00:39:38.928187 coreos-metadata[1511]: Mar 13 00:39:38.928 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 13 00:39:39.192096 systemd-networkd[1428]: eth0: Gained IPv6LL Mar 13 00:39:39.195157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:39:39.196068 coreos-metadata[1511]: Mar 13 00:39:39.196 INFO Fetch successful Mar 13 00:39:39.202294 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:39:39.208152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:39.212032 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:39:39.249619 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:39:39.326567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:39:39.328130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:39:40.168808 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:39:40.173747 systemd[1]: Started sshd@0-172.236.108.24:22-68.220.241.50:38520.service - OpenSSH per-connection server daemon (68.220.241.50:38520). Mar 13 00:39:40.176786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:40.178650 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:39:40.181849 systemd[1]: Startup finished in 2.931s (kernel) + 8.864s (initrd) + 5.717s (userspace) = 17.514s. 
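The coreos-metadata records above show the Flatcar metadata agent first PUT-ing http://169.254.169.254/v1/token and then fetching /v1/instance, /v1/network and /v1/ssh-keys with that token. A minimal Python sketch of the same token-then-fetch flow; the header names are an assumption based on the Linode metadata API, since the log only records the URLs:

    # Sketch of the token + fetch sequence seen in the coreos-metadata records above.
    # Header names are assumed (Linode metadata API); only the URLs appear in the log.
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def get_token(expiry_seconds: int = 3600) -> str:
        req = urllib.request.Request(
            f"{BASE}/token",
            method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": str(expiry_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def fetch(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{BASE}/{path}",
            headers={"Metadata-Token": token, "Accept": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    token = get_token()
    for path in ("instance", "network", "ssh-keys"):
        print(path, "->", len(fetch(path, token)), "bytes")
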
Mar 13 00:39:40.250704 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:39:40.351171 sshd[1686]: Accepted publickey for core from 68.220.241.50 port 38520 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:40.355138 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:40.364490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:39:40.367422 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:39:40.380298 systemd-logind[1523]: New session 1 of user core. Mar 13 00:39:40.389080 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:39:40.393893 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:39:40.406138 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:39:40.409295 systemd-logind[1523]: New session c1 of user core. Mar 13 00:39:40.547942 systemd[1701]: Queued start job for default target default.target. Mar 13 00:39:40.552141 systemd[1701]: Created slice app.slice - User Application Slice. Mar 13 00:39:40.552165 systemd[1701]: Reached target paths.target - Paths. Mar 13 00:39:40.552206 systemd[1701]: Reached target timers.target - Timers. Mar 13 00:39:40.555751 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:39:40.577630 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:39:40.577812 systemd[1701]: Reached target sockets.target - Sockets. Mar 13 00:39:40.577863 systemd[1701]: Reached target basic.target - Basic System. Mar 13 00:39:40.577910 systemd[1701]: Reached target default.target - Main User Target. Mar 13 00:39:40.577944 systemd[1701]: Startup finished in 159ms. Mar 13 00:39:40.578090 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:39:40.587770 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:39:40.672158 systemd[1]: Started sshd@1-172.236.108.24:22-68.220.241.50:38522.service - OpenSSH per-connection server daemon (68.220.241.50:38522). Mar 13 00:39:40.768114 kubelet[1687]: E0313 00:39:40.768079 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:39:40.772589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:39:40.772849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:39:40.773409 systemd[1]: kubelet.service: Consumed 832ms CPU time, 257.1M memory peak. Mar 13 00:39:40.838188 sshd[1712]: Accepted publickey for core from 68.220.241.50 port 38522 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:40.840022 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:40.846708 systemd-logind[1523]: New session 2 of user core. Mar 13 00:39:40.854789 systemd[1]: Started session-2.scope - Session 2 of User core. 
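The sshd records above identify the accepted client key only by its fingerprint (RSA SHA256:jThJ…). OpenSSH's SHA256 fingerprint is the unpadded base64 of the SHA-256 digest of the raw key blob, so a local key can be matched against the log with a few lines of Python:

    # Compute an OpenSSH-style "SHA256:..." fingerprint for a public key line
    # (e.g. from /home/core/.ssh/authorized_keys) to compare against the sshd log.
    import base64
    import hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        key_blob = base64.b64decode(pubkey_line.split()[1])   # decode the key body
        digest = hashlib.sha256(key_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
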
Mar 13 00:39:40.913375 sshd[1717]: Connection closed by 68.220.241.50 port 38522 Mar 13 00:39:40.914877 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:40.921574 systemd[1]: sshd@1-172.236.108.24:22-68.220.241.50:38522.service: Deactivated successfully. Mar 13 00:39:40.924515 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:39:40.925383 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:39:40.927476 systemd-logind[1523]: Removed session 2. Mar 13 00:39:40.954896 systemd[1]: Started sshd@2-172.236.108.24:22-68.220.241.50:38528.service - OpenSSH per-connection server daemon (68.220.241.50:38528). Mar 13 00:39:41.151071 sshd[1723]: Accepted publickey for core from 68.220.241.50 port 38528 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:41.152497 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:41.158712 systemd-logind[1523]: New session 3 of user core. Mar 13 00:39:41.168788 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:39:41.235432 sshd[1726]: Connection closed by 68.220.241.50 port 38528 Mar 13 00:39:41.235999 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:41.240332 systemd[1]: sshd@2-172.236.108.24:22-68.220.241.50:38528.service: Deactivated successfully. Mar 13 00:39:41.242355 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:39:41.243586 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:39:41.244602 systemd-logind[1523]: Removed session 3. Mar 13 00:39:41.279718 systemd[1]: Started sshd@3-172.236.108.24:22-68.220.241.50:38544.service - OpenSSH per-connection server daemon (68.220.241.50:38544). Mar 13 00:39:41.434769 sshd[1732]: Accepted publickey for core from 68.220.241.50 port 38544 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:41.436708 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:41.441699 systemd-logind[1523]: New session 4 of user core. Mar 13 00:39:41.446929 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:39:41.506375 sshd[1735]: Connection closed by 68.220.241.50 port 38544 Mar 13 00:39:41.507831 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:41.511476 systemd[1]: sshd@3-172.236.108.24:22-68.220.241.50:38544.service: Deactivated successfully. Mar 13 00:39:41.515058 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:39:41.518268 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:39:41.519571 systemd-logind[1523]: Removed session 4. Mar 13 00:39:41.539897 systemd[1]: Started sshd@4-172.236.108.24:22-68.220.241.50:38548.service - OpenSSH per-connection server daemon (68.220.241.50:38548). Mar 13 00:39:41.708478 sshd[1741]: Accepted publickey for core from 68.220.241.50 port 38548 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:41.710530 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:41.716224 systemd-logind[1523]: New session 5 of user core. Mar 13 00:39:41.721784 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 13 00:39:41.768109 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:39:41.768439 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:41.783710 sudo[1745]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:41.805749 sshd[1744]: Connection closed by 68.220.241.50 port 38548 Mar 13 00:39:41.807573 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:41.811856 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:39:41.812181 systemd[1]: sshd@4-172.236.108.24:22-68.220.241.50:38548.service: Deactivated successfully. Mar 13 00:39:41.814320 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:39:41.815972 systemd-logind[1523]: Removed session 5. Mar 13 00:39:41.853382 systemd[1]: Started sshd@5-172.236.108.24:22-68.220.241.50:42836.service - OpenSSH per-connection server daemon (68.220.241.50:42836). Mar 13 00:39:42.031763 sshd[1752]: Accepted publickey for core from 68.220.241.50 port 42836 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:42.032981 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:42.038995 systemd-logind[1523]: New session 6 of user core. Mar 13 00:39:42.047784 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:39:42.090761 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:39:42.091086 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:42.096519 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:42.102731 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:39:42.103042 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:42.114142 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:39:42.154480 augenrules[1779]: No rules Mar 13 00:39:42.156311 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:39:42.156582 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:39:42.158124 sudo[1756]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:42.184358 sshd[1755]: Connection closed by 68.220.241.50 port 42836 Mar 13 00:39:42.184821 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:42.189174 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:39:42.189337 systemd[1]: sshd@5-172.236.108.24:22-68.220.241.50:42836.service: Deactivated successfully. Mar 13 00:39:42.191221 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:39:42.193132 systemd-logind[1523]: Removed session 6. Mar 13 00:39:42.215062 systemd[1]: Started sshd@6-172.236.108.24:22-68.220.241.50:42852.service - OpenSSH per-connection server daemon (68.220.241.50:42852). Mar 13 00:39:42.363740 sshd[1788]: Accepted publickey for core from 68.220.241.50 port 42852 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:39:42.365697 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:42.371579 systemd-logind[1523]: New session 7 of user core. Mar 13 00:39:42.386788 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 13 00:39:42.420017 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:39:42.420343 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:42.723510 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:39:42.738096 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:39:42.969443 dockerd[1810]: time="2026-03-13T00:39:42.969384853Z" level=info msg="Starting up" Mar 13 00:39:42.971743 dockerd[1810]: time="2026-03-13T00:39:42.971723968Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:39:42.982292 dockerd[1810]: time="2026-03-13T00:39:42.982212190Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:39:42.996976 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3831430927-merged.mount: Deactivated successfully. Mar 13 00:39:43.068862 dockerd[1810]: time="2026-03-13T00:39:43.068827602Z" level=info msg="Loading containers: start." Mar 13 00:39:43.080658 kernel: Initializing XFRM netlink socket Mar 13 00:39:43.360761 systemd-networkd[1428]: docker0: Link UP Mar 13 00:39:43.364452 dockerd[1810]: time="2026-03-13T00:39:43.364414076Z" level=info msg="Loading containers: done." Mar 13 00:39:43.378338 dockerd[1810]: time="2026-03-13T00:39:43.378297940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:39:43.378711 dockerd[1810]: time="2026-03-13T00:39:43.378358928Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:39:43.378711 dockerd[1810]: time="2026-03-13T00:39:43.378628996Z" level=info msg="Initializing buildkit" Mar 13 00:39:43.401069 dockerd[1810]: time="2026-03-13T00:39:43.401038674Z" level=info msg="Completed buildkit initialization" Mar 13 00:39:43.411714 dockerd[1810]: time="2026-03-13T00:39:43.411613571Z" level=info msg="Daemon has completed initialization" Mar 13 00:39:43.412231 dockerd[1810]: time="2026-03-13T00:39:43.412176431Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:39:43.412849 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:39:43.877529 containerd[1551]: time="2026-03-13T00:39:43.877490374Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:39:44.529079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427022203.mount: Deactivated successfully. 
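The transient mount units logged above and below (for example var-lib-containerd-tmpmounts-containerd\x2dmount2427022203.mount) use systemd's unit-name escaping: / becomes -, and a literal - becomes \x2d. A rough Python decoder for such names (systemd-escape --unescape --path handles the general case):

    # Rough decoder for systemd-escaped mount unit names like the ones in this log.
    import re

    def unescape_mount_unit(unit: str) -> str:
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")                 # '-' encodes a path separator
        name = re.sub(r"\\x([0-9a-fA-F]{2})",         # '\xNN' encodes a literal byte
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount2427022203.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount2427022203
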
Mar 13 00:39:45.788931 containerd[1551]: time="2026-03-13T00:39:45.788888087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:45.790397 containerd[1551]: time="2026-03-13T00:39:45.790377910Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074503" Mar 13 00:39:45.790942 containerd[1551]: time="2026-03-13T00:39:45.790902014Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:45.793203 containerd[1551]: time="2026-03-13T00:39:45.793168621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:45.794661 containerd[1551]: time="2026-03-13T00:39:45.794231913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.91670228s" Mar 13 00:39:45.794661 containerd[1551]: time="2026-03-13T00:39:45.794260905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:39:45.795370 containerd[1551]: time="2026-03-13T00:39:45.795334620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:39:47.131938 containerd[1551]: time="2026-03-13T00:39:47.131875277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:47.133175 containerd[1551]: time="2026-03-13T00:39:47.132749154Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165829" Mar 13 00:39:47.133683 containerd[1551]: time="2026-03-13T00:39:47.133650016Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:47.136071 containerd[1551]: time="2026-03-13T00:39:47.136032976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:47.137458 containerd[1551]: time="2026-03-13T00:39:47.137409115Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.34204932s" Mar 13 00:39:47.137458 containerd[1551]: time="2026-03-13T00:39:47.137455582Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:39:47.138590 
containerd[1551]: time="2026-03-13T00:39:47.138567844Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:39:48.104838 containerd[1551]: time="2026-03-13T00:39:48.104783710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:48.105513 containerd[1551]: time="2026-03-13T00:39:48.105491871Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729830" Mar 13 00:39:48.106019 containerd[1551]: time="2026-03-13T00:39:48.105984167Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:48.113198 containerd[1551]: time="2026-03-13T00:39:48.113168948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:48.114883 containerd[1551]: time="2026-03-13T00:39:48.114852996Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 976.160761ms" Mar 13 00:39:48.114883 containerd[1551]: time="2026-03-13T00:39:48.114877186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:39:48.115267 containerd[1551]: time="2026-03-13T00:39:48.115248084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:39:49.132121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191649585.mount: Deactivated successfully. 
Mar 13 00:39:49.387644 containerd[1551]: time="2026-03-13T00:39:49.387518248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:49.388884 containerd[1551]: time="2026-03-13T00:39:49.388737742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861776" Mar 13 00:39:49.389393 containerd[1551]: time="2026-03-13T00:39:49.389366825Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:49.390691 containerd[1551]: time="2026-03-13T00:39:49.390670284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:49.391176 containerd[1551]: time="2026-03-13T00:39:49.391148937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.275877466s" Mar 13 00:39:49.391219 containerd[1551]: time="2026-03-13T00:39:49.391176993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:39:49.391810 containerd[1551]: time="2026-03-13T00:39:49.391786123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:39:49.889823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054802600.mount: Deactivated successfully. 
Mar 13 00:39:50.615675 containerd[1551]: time="2026-03-13T00:39:50.615634686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:50.616820 containerd[1551]: time="2026-03-13T00:39:50.616794746Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Mar 13 00:39:50.617485 containerd[1551]: time="2026-03-13T00:39:50.617437828Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:50.620896 containerd[1551]: time="2026-03-13T00:39:50.620720129Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.228908665s" Mar 13 00:39:50.620896 containerd[1551]: time="2026-03-13T00:39:50.620748383Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:39:50.620896 containerd[1551]: time="2026-03-13T00:39:50.620833854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:50.621342 containerd[1551]: time="2026-03-13T00:39:50.621293132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:39:51.026234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:39:51.029765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:51.147784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915472946.mount: Deactivated successfully. 
Mar 13 00:39:51.155327 containerd[1551]: time="2026-03-13T00:39:51.154786450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:51.156029 containerd[1551]: time="2026-03-13T00:39:51.156009629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Mar 13 00:39:51.156549 containerd[1551]: time="2026-03-13T00:39:51.156530363Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:51.158971 containerd[1551]: time="2026-03-13T00:39:51.158952525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:51.159432 containerd[1551]: time="2026-03-13T00:39:51.159331933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 538.014731ms" Mar 13 00:39:51.159896 containerd[1551]: time="2026-03-13T00:39:51.159880416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:39:51.160421 containerd[1551]: time="2026-03-13T00:39:51.160291159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:39:51.225289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:51.237683 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:39:51.282224 kubelet[2158]: E0313 00:39:51.282128 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:39:51.287247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:39:51.287448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:39:51.287858 systemd[1]: kubelet.service: Consumed 205ms CPU time, 109.3M memory peak. Mar 13 00:39:51.678791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987647861.mount: Deactivated successfully. 
Mar 13 00:39:52.281694 containerd[1551]: time="2026-03-13T00:39:52.281656837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:52.282461 containerd[1551]: time="2026-03-13T00:39:52.282437209Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860680" Mar 13 00:39:52.282917 containerd[1551]: time="2026-03-13T00:39:52.282880160Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:52.285098 containerd[1551]: time="2026-03-13T00:39:52.285063923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:52.285968 containerd[1551]: time="2026-03-13T00:39:52.285873493Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.125288358s" Mar 13 00:39:52.285968 containerd[1551]: time="2026-03-13T00:39:52.285898265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 13 00:39:55.086596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:55.086763 systemd[1]: kubelet.service: Consumed 205ms CPU time, 109.3M memory peak. Mar 13 00:39:55.088881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:55.123594 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit session-7.scope)... Mar 13 00:39:55.123607 systemd[1]: Reloading... Mar 13 00:39:55.231692 zram_generator::config[2299]: No configuration found. Mar 13 00:39:55.444673 systemd[1]: Reloading finished in 320 ms. Mar 13 00:39:55.507147 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:39:55.507251 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:39:55.507664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:55.507708 systemd[1]: kubelet.service: Consumed 129ms CPU time, 98.2M memory peak. Mar 13 00:39:55.509811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:55.682200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:55.689935 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:39:55.725870 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:39:55.725870 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
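The image pulls logged above each report a size and a wall-clock duration. The sizes are the packed image sizes containerd reports, so the figures below are only rough effective rates, computed directly from the logged values:

    # Rough effective pull rates from the sizes/durations containerd logged above.
    pulls = {
        "kube-apiserver:v1.34.5":          (27_071_096, 1.91670228),
        "kube-controller-manager:v1.34.5": (22_822_771, 1.34204932),
        "kube-scheduler:v1.34.5":          (17_386_790, 0.976160761),
        "kube-proxy:v1.34.5":              (25_860_789, 1.275877466),
        "coredns:v1.12.1":                 (22_384_805, 1.228908665),
        "pause:3.10.1":                    (320_448,    0.538014731),
        "etcd:3.6.5-0":                    (22_871_747, 1.125288358),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes/1e6:.1f} MB in {seconds:.2f}s "
              f"≈ {size_bytes/seconds/1e6:.1f} MB/s")
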
Mar 13 00:39:55.725870 kubelet[2353]: I0313 00:39:55.724364 2353 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:39:56.444732 kubelet[2353]: I0313 00:39:56.444698 2353 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:39:56.444732 kubelet[2353]: I0313 00:39:56.444724 2353 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:39:56.444876 kubelet[2353]: I0313 00:39:56.444751 2353 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:39:56.444876 kubelet[2353]: I0313 00:39:56.444761 2353 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:39:56.444965 kubelet[2353]: I0313 00:39:56.444952 2353 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:39:56.449069 kubelet[2353]: E0313 00:39:56.449037 2353 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.108.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:56.449297 kubelet[2353]: I0313 00:39:56.449211 2353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:39:56.453792 kubelet[2353]: I0313 00:39:56.453777 2353 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:39:56.459145 kubelet[2353]: I0313 00:39:56.459131 2353 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:39:56.460316 kubelet[2353]: I0313 00:39:56.459957 2353 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:39:56.460316 kubelet[2353]: I0313 00:39:56.459980 2353 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:39:56.460316 kubelet[2353]: I0313 00:39:56.460090 2353 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:39:56.460316 kubelet[2353]: I0313 00:39:56.460098 2353 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:39:56.460520 kubelet[2353]: I0313 00:39:56.460170 2353 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:39:56.462253 kubelet[2353]: I0313 00:39:56.462239 2353 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:56.462505 kubelet[2353]: I0313 00:39:56.462493 2353 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:39:56.462565 kubelet[2353]: I0313 00:39:56.462556 2353 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:39:56.462640 kubelet[2353]: I0313 00:39:56.462630 2353 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:39:56.462707 kubelet[2353]: I0313 00:39:56.462698 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:39:56.466148 kubelet[2353]: E0313 00:39:56.466127 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.108.24:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-108-24&limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:56.466408 kubelet[2353]: I0313 00:39:56.466393 2353 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:39:56.466868 kubelet[2353]: I0313 00:39:56.466854 2353 
kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:39:56.467456 kubelet[2353]: I0313 00:39:56.466940 2353 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:39:56.467456 kubelet[2353]: W0313 00:39:56.466983 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:39:56.469208 kubelet[2353]: E0313 00:39:56.469183 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.108.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:56.470593 kubelet[2353]: I0313 00:39:56.470582 2353 server.go:1262] "Started kubelet" Mar 13 00:39:56.472066 kubelet[2353]: I0313 00:39:56.472052 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:39:56.475020 kubelet[2353]: E0313 00:39:56.474023 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.108.24:6443/api/v1/namespaces/default/events\": dial tcp 172.236.108.24:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-108-24.189c3fb433a99645 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-108-24,UID:172-236-108-24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-108-24,},FirstTimestamp:2026-03-13 00:39:56.470548037 +0000 UTC m=+0.777372016,LastTimestamp:2026-03-13 00:39:56.470548037 +0000 UTC m=+0.777372016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-108-24,}" Mar 13 00:39:56.475453 kubelet[2353]: I0313 00:39:56.475257 2353 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:39:56.476458 kubelet[2353]: I0313 00:39:56.476431 2353 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:39:56.479600 kubelet[2353]: I0313 00:39:56.479569 2353 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:39:56.479652 kubelet[2353]: I0313 00:39:56.479611 2353 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:39:56.480675 kubelet[2353]: I0313 00:39:56.479937 2353 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:39:56.480675 kubelet[2353]: I0313 00:39:56.480101 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:39:56.482897 kubelet[2353]: E0313 00:39:56.482876 2353 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-108-24\" not found" Mar 13 00:39:56.482943 kubelet[2353]: I0313 00:39:56.482907 2353 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:39:56.483041 kubelet[2353]: I0313 00:39:56.483022 2353 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:39:56.483077 kubelet[2353]: I0313 
00:39:56.483063 2353 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:39:56.483338 kubelet[2353]: E0313 00:39:56.483315 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.108.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:56.483565 kubelet[2353]: I0313 00:39:56.483542 2353 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:39:56.483676 kubelet[2353]: I0313 00:39:56.483608 2353 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:39:56.483962 kubelet[2353]: E0313 00:39:56.483939 2353 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:39:56.484170 kubelet[2353]: E0313 00:39:56.484139 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-24?timeout=10s\": dial tcp 172.236.108.24:6443: connect: connection refused" interval="200ms" Mar 13 00:39:56.484656 kubelet[2353]: I0313 00:39:56.484595 2353 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:39:56.495177 kubelet[2353]: I0313 00:39:56.495154 2353 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:39:56.496453 kubelet[2353]: I0313 00:39:56.496439 2353 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:39:56.496523 kubelet[2353]: I0313 00:39:56.496513 2353 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:39:56.496582 kubelet[2353]: I0313 00:39:56.496573 2353 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:39:56.496696 kubelet[2353]: E0313 00:39:56.496676 2353 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:39:56.503974 kubelet[2353]: E0313 00:39:56.503957 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.108.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:56.516387 kubelet[2353]: I0313 00:39:56.516359 2353 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:39:56.516713 kubelet[2353]: I0313 00:39:56.516701 2353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:39:56.516794 kubelet[2353]: I0313 00:39:56.516785 2353 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:56.518435 kubelet[2353]: I0313 00:39:56.518413 2353 policy_none.go:49] "None policy: Start" Mar 13 00:39:56.518559 kubelet[2353]: I0313 00:39:56.518494 2353 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:39:56.518559 kubelet[2353]: I0313 00:39:56.518508 2353 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:39:56.519362 kubelet[2353]: I0313 00:39:56.519350 2353 policy_none.go:47] "Start" Mar 13 00:39:56.523512 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:39:56.532483 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:39:56.535922 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:39:56.547530 kubelet[2353]: E0313 00:39:56.547508 2353 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:39:56.547943 kubelet[2353]: I0313 00:39:56.547767 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:39:56.548405 kubelet[2353]: I0313 00:39:56.548380 2353 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:39:56.549533 kubelet[2353]: I0313 00:39:56.549451 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:39:56.550531 kubelet[2353]: E0313 00:39:56.550488 2353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:39:56.550744 kubelet[2353]: E0313 00:39:56.550724 2353 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-108-24\" not found" Mar 13 00:39:56.606871 systemd[1]: Created slice kubepods-burstable-pod30a2e9dc8638c962ae30f4f6fdd4daea.slice - libcontainer container kubepods-burstable-pod30a2e9dc8638c962ae30f4f6fdd4daea.slice. 
Mar 13 00:39:56.614898 kubelet[2353]: E0313 00:39:56.614715 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:56.616900 systemd[1]: Created slice kubepods-burstable-pod5b2759688f07ab7a04febe620a512b11.slice - libcontainer container kubepods-burstable-pod5b2759688f07ab7a04febe620a512b11.slice. Mar 13 00:39:56.628991 kubelet[2353]: E0313 00:39:56.628966 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:56.632286 systemd[1]: Created slice kubepods-burstable-pod625ccde0493eb355c8c4c4cd66360f5c.slice - libcontainer container kubepods-burstable-pod625ccde0493eb355c8c4c4cd66360f5c.slice. Mar 13 00:39:56.634212 kubelet[2353]: E0313 00:39:56.634192 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:56.650279 kubelet[2353]: I0313 00:39:56.650252 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-24" Mar 13 00:39:56.650524 kubelet[2353]: E0313 00:39:56.650498 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.24:6443/api/v1/nodes\": dial tcp 172.236.108.24:6443: connect: connection refused" node="172-236-108-24" Mar 13 00:39:56.685022 kubelet[2353]: E0313 00:39:56.685000 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-24?timeout=10s\": dial tcp 172.236.108.24:6443: connect: connection refused" interval="400ms" Mar 13 00:39:56.784485 kubelet[2353]: I0313 00:39:56.784384 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:39:56.784485 kubelet[2353]: I0313 00:39:56.784431 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-k8s-certs\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:39:56.784485 kubelet[2353]: I0313 00:39:56.784453 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:39:56.784485 kubelet[2353]: I0313 00:39:56.784474 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/625ccde0493eb355c8c4c4cd66360f5c-kubeconfig\") pod \"kube-scheduler-172-236-108-24\" (UID: \"625ccde0493eb355c8c4c4cd66360f5c\") " pod="kube-system/kube-scheduler-172-236-108-24" Mar 13 00:39:56.785733 kubelet[2353]: I0313 00:39:56.784491 2353 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-ca-certs\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:39:56.785733 kubelet[2353]: I0313 00:39:56.784508 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-k8s-certs\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:39:56.785733 kubelet[2353]: I0313 00:39:56.784525 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:39:56.785733 kubelet[2353]: I0313 00:39:56.784557 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-ca-certs\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:39:56.785733 kubelet[2353]: I0313 00:39:56.784574 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-kubeconfig\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:39:56.852552 kubelet[2353]: I0313 00:39:56.852527 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-24" Mar 13 00:39:56.852756 kubelet[2353]: E0313 00:39:56.852729 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.24:6443/api/v1/nodes\": dial tcp 172.236.108.24:6443: connect: connection refused" node="172-236-108-24" Mar 13 00:39:56.917284 kubelet[2353]: E0313 00:39:56.916994 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:56.917759 containerd[1551]: time="2026-03-13T00:39:56.917721590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-24,Uid:30a2e9dc8638c962ae30f4f6fdd4daea,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:56.931092 kubelet[2353]: E0313 00:39:56.931071 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:56.931577 containerd[1551]: time="2026-03-13T00:39:56.931386467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-108-24,Uid:5b2759688f07ab7a04febe620a512b11,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:56.935490 kubelet[2353]: E0313 00:39:56.935473 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:56.935801 containerd[1551]: time="2026-03-13T00:39:56.935771249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-24,Uid:625ccde0493eb355c8c4c4cd66360f5c,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:57.085724 kubelet[2353]: E0313 00:39:57.085613 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-24?timeout=10s\": dial tcp 172.236.108.24:6443: connect: connection refused" interval="800ms" Mar 13 00:39:57.254665 kubelet[2353]: I0313 00:39:57.254608 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-24" Mar 13 00:39:57.254852 kubelet[2353]: E0313 00:39:57.254833 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.108.24:6443/api/v1/nodes\": dial tcp 172.236.108.24:6443: connect: connection refused" node="172-236-108-24" Mar 13 00:39:57.399023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567296204.mount: Deactivated successfully. Mar 13 00:39:57.403386 containerd[1551]: time="2026-03-13T00:39:57.403352594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:57.404583 containerd[1551]: time="2026-03-13T00:39:57.404390427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:57.405443 containerd[1551]: time="2026-03-13T00:39:57.405425378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Mar 13 00:39:57.405834 containerd[1551]: time="2026-03-13T00:39:57.405810662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:39:57.407253 containerd[1551]: time="2026-03-13T00:39:57.407218162Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:57.408642 containerd[1551]: time="2026-03-13T00:39:57.407878456Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:57.408642 containerd[1551]: time="2026-03-13T00:39:57.408020862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:39:57.409530 containerd[1551]: time="2026-03-13T00:39:57.409509668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:57.411537 containerd[1551]: time="2026-03-13T00:39:57.411517668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size 
\"320368\" in 475.056508ms" Mar 13 00:39:57.414446 containerd[1551]: time="2026-03-13T00:39:57.414411945Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.860907ms" Mar 13 00:39:57.415244 containerd[1551]: time="2026-03-13T00:39:57.414901719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.795581ms" Mar 13 00:39:57.427381 containerd[1551]: time="2026-03-13T00:39:57.427355462Z" level=info msg="connecting to shim dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72" address="unix:///run/containerd/s/c61ede07e4a195a203b64d23cef8c125934a51a4afa6e856ca7bca4a7eef8458" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:57.453154 containerd[1551]: time="2026-03-13T00:39:57.453112373Z" level=info msg="connecting to shim 6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba" address="unix:///run/containerd/s/85a71100381d473b2b0c461c29098e94ffc6c338f9b2441dd2d85651396b57b4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:57.458169 containerd[1551]: time="2026-03-13T00:39:57.458144822Z" level=info msg="connecting to shim 705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4" address="unix:///run/containerd/s/c7a69b8a00be183d6f2a5ba5c13b1877a22d42f9c957df06dc914bc315484326" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:57.471920 systemd[1]: Started cri-containerd-dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72.scope - libcontainer container dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72. Mar 13 00:39:57.493481 kubelet[2353]: E0313 00:39:57.491933 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.108.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:57.492027 systemd[1]: Started cri-containerd-6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba.scope - libcontainer container 6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba. Mar 13 00:39:57.493312 systemd[1]: Started cri-containerd-705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4.scope - libcontainer container 705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4. 
Mar 13 00:39:57.564122 containerd[1551]: time="2026-03-13T00:39:57.564025010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-24,Uid:625ccde0493eb355c8c4c4cd66360f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72\"" Mar 13 00:39:57.565162 kubelet[2353]: E0313 00:39:57.565136 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:57.569196 containerd[1551]: time="2026-03-13T00:39:57.569165946Z" level=info msg="CreateContainer within sandbox \"dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:39:57.571369 containerd[1551]: time="2026-03-13T00:39:57.571262324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-108-24,Uid:5b2759688f07ab7a04febe620a512b11,Namespace:kube-system,Attempt:0,} returns sandbox id \"6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba\"" Mar 13 00:39:57.572786 kubelet[2353]: E0313 00:39:57.572671 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:57.575451 containerd[1551]: time="2026-03-13T00:39:57.575432311Z" level=info msg="CreateContainer within sandbox \"6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:39:57.576820 containerd[1551]: time="2026-03-13T00:39:57.576792583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-24,Uid:30a2e9dc8638c962ae30f4f6fdd4daea,Namespace:kube-system,Attempt:0,} returns sandbox id \"705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4\"" Mar 13 00:39:57.577491 kubelet[2353]: E0313 00:39:57.577476 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:57.583118 containerd[1551]: time="2026-03-13T00:39:57.582836058Z" level=info msg="Container b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:57.584693 containerd[1551]: time="2026-03-13T00:39:57.584662408Z" level=info msg="CreateContainer within sandbox \"705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:39:57.586831 containerd[1551]: time="2026-03-13T00:39:57.586813689Z" level=info msg="Container bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:57.590101 containerd[1551]: time="2026-03-13T00:39:57.590082432Z" level=info msg="CreateContainer within sandbox \"dd1746e1f598335b2ac3df8074135b7303f298107c30dc1aef513ba018592b72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6\"" Mar 13 00:39:57.591725 containerd[1551]: time="2026-03-13T00:39:57.591309287Z" level=info msg="StartContainer for \"b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6\"" Mar 13 00:39:57.593214 containerd[1551]: 
time="2026-03-13T00:39:57.593193543Z" level=info msg="CreateContainer within sandbox \"6da8b5faa461c20e8e166c8a7bb9c19aa37086e07b5a5254151b9a0c8c0860ba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a\"" Mar 13 00:39:57.594002 containerd[1551]: time="2026-03-13T00:39:57.593781257Z" level=info msg="connecting to shim b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6" address="unix:///run/containerd/s/c61ede07e4a195a203b64d23cef8c125934a51a4afa6e856ca7bca4a7eef8458" protocol=ttrpc version=3 Mar 13 00:39:57.594938 containerd[1551]: time="2026-03-13T00:39:57.594920868Z" level=info msg="StartContainer for \"bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a\"" Mar 13 00:39:57.595742 containerd[1551]: time="2026-03-13T00:39:57.595726381Z" level=info msg="Container 88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:57.598515 containerd[1551]: time="2026-03-13T00:39:57.598455270Z" level=info msg="connecting to shim bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a" address="unix:///run/containerd/s/85a71100381d473b2b0c461c29098e94ffc6c338f9b2441dd2d85651396b57b4" protocol=ttrpc version=3 Mar 13 00:39:57.602993 containerd[1551]: time="2026-03-13T00:39:57.602955298Z" level=info msg="CreateContainer within sandbox \"705a0bad9d189229fa7b0e6dba18b9a35b532c5d2b36d10a550e485e9bba60b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5\"" Mar 13 00:39:57.603361 containerd[1551]: time="2026-03-13T00:39:57.603327274Z" level=info msg="StartContainer for \"88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5\"" Mar 13 00:39:57.604323 containerd[1551]: time="2026-03-13T00:39:57.604302562Z" level=info msg="connecting to shim 88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5" address="unix:///run/containerd/s/c7a69b8a00be183d6f2a5ba5c13b1877a22d42f9c957df06dc914bc315484326" protocol=ttrpc version=3 Mar 13 00:39:57.628504 systemd[1]: Started cri-containerd-b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6.scope - libcontainer container b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6. Mar 13 00:39:57.635764 systemd[1]: Started cri-containerd-bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a.scope - libcontainer container bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a. Mar 13 00:39:57.643866 systemd[1]: Started cri-containerd-88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5.scope - libcontainer container 88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5. 
Mar 13 00:39:57.719888 containerd[1551]: time="2026-03-13T00:39:57.719265800Z" level=info msg="StartContainer for \"bce7ee8925302ffd500731fbaab4b8e9ed0206c2f7ac442dcbdcd8aba7b9117a\" returns successfully" Mar 13 00:39:57.736975 containerd[1551]: time="2026-03-13T00:39:57.736918324Z" level=info msg="StartContainer for \"88d7ed63db504af5b67c5fcce6eac6ba8d0f1993312b98333f1484c7dc3d5bf5\" returns successfully" Mar 13 00:39:57.737997 containerd[1551]: time="2026-03-13T00:39:57.737855264Z" level=info msg="StartContainer for \"b73638bd4c4a6b1cf1fe6fb56d35d77e515fa182b8855a6f33a8e99ca64db7c6\" returns successfully" Mar 13 00:39:57.753014 kubelet[2353]: E0313 00:39:57.752831 2353 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.108.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.108.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:58.056746 kubelet[2353]: I0313 00:39:58.056394 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-24" Mar 13 00:39:58.520299 kubelet[2353]: E0313 00:39:58.520018 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:58.520299 kubelet[2353]: E0313 00:39:58.520122 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:58.525668 kubelet[2353]: E0313 00:39:58.524191 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:58.526892 kubelet[2353]: E0313 00:39:58.526878 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:58.527077 kubelet[2353]: E0313 00:39:58.526333 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:58.527505 kubelet[2353]: E0313 00:39:58.527492 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:59.529332 kubelet[2353]: E0313 00:39:59.529160 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:59.529332 kubelet[2353]: E0313 00:39:59.529273 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:59.529956 kubelet[2353]: E0313 00:39:59.529829 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:59.529956 kubelet[2353]: E0313 00:39:59.529921 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:59.982559 
kubelet[2353]: E0313 00:39:59.982349 2353 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:39:59.982559 kubelet[2353]: E0313 00:39:59.982456 2353 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:39:59.986089 kubelet[2353]: E0313 00:39:59.986065 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-108-24\" not found" node="172-236-108-24" Mar 13 00:40:00.030479 kubelet[2353]: E0313 00:40:00.030354 2353 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-236-108-24.189c3fb433a99645 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-108-24,UID:172-236-108-24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-108-24,},FirstTimestamp:2026-03-13 00:39:56.470548037 +0000 UTC m=+0.777372016,LastTimestamp:2026-03-13 00:39:56.470548037 +0000 UTC m=+0.777372016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-108-24,}" Mar 13 00:40:00.075650 kubelet[2353]: I0313 00:40:00.075220 2353 kubelet_node_status.go:78] "Successfully registered node" node="172-236-108-24" Mar 13 00:40:00.084303 kubelet[2353]: I0313 00:40:00.084063 2353 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-24" Mar 13 00:40:00.098657 kubelet[2353]: E0313 00:40:00.098639 2353 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-108-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-108-24" Mar 13 00:40:00.098718 kubelet[2353]: I0313 00:40:00.098710 2353 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:00.104137 kubelet[2353]: E0313 00:40:00.104122 2353 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:00.104208 kubelet[2353]: I0313 00:40:00.104198 2353 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:00.109021 kubelet[2353]: E0313 00:40:00.108999 2353 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-108-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:00.468376 kubelet[2353]: I0313 00:40:00.468019 2353 apiserver.go:52] "Watching apiserver" Mar 13 00:40:00.484017 kubelet[2353]: I0313 00:40:00.483988 2353 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:40:02.137612 systemd[1]: Reload requested from client PID 2634 ('systemctl') (unit session-7.scope)... Mar 13 00:40:02.137642 systemd[1]: Reloading... Mar 13 00:40:02.236662 zram_generator::config[2684]: No configuration found. Mar 13 00:40:02.436459 systemd[1]: Reloading finished in 298 ms. 
Mar 13 00:40:02.470527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:02.487812 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:40:02.488098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:02.488150 systemd[1]: kubelet.service: Consumed 1.131s CPU time, 124.4M memory peak. Mar 13 00:40:02.490177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:02.653088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:40:02.660937 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:40:02.704349 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:40:02.704349 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:40:02.704349 kubelet[2729]: I0313 00:40:02.704087 2729 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:40:02.710390 kubelet[2729]: I0313 00:40:02.710363 2729 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:40:02.710390 kubelet[2729]: I0313 00:40:02.710383 2729 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:40:02.710480 kubelet[2729]: I0313 00:40:02.710404 2729 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:40:02.710480 kubelet[2729]: I0313 00:40:02.710414 2729 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:40:02.710630 kubelet[2729]: I0313 00:40:02.710596 2729 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:40:02.714604 kubelet[2729]: I0313 00:40:02.714576 2729 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:40:02.718641 kubelet[2729]: I0313 00:40:02.718594 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:40:02.722510 kubelet[2729]: I0313 00:40:02.722492 2729 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:40:02.726336 kubelet[2729]: I0313 00:40:02.726324 2729 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:40:02.726564 kubelet[2729]: I0313 00:40:02.726541 2729 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:40:02.726695 kubelet[2729]: I0313 00:40:02.726564 2729 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:40:02.726791 kubelet[2729]: I0313 00:40:02.726696 2729 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:40:02.726791 kubelet[2729]: I0313 00:40:02.726705 2729 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:40:02.726791 kubelet[2729]: I0313 00:40:02.726728 2729 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:40:02.726888 kubelet[2729]: I0313 00:40:02.726874 2729 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:40:02.727089 kubelet[2729]: I0313 00:40:02.727077 2729 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:40:02.727115 kubelet[2729]: I0313 00:40:02.727090 2729 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:40:02.727115 kubelet[2729]: I0313 00:40:02.727108 2729 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:40:02.727179 kubelet[2729]: I0313 00:40:02.727116 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:40:02.730039 kubelet[2729]: I0313 00:40:02.729965 2729 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:40:02.730414 kubelet[2729]: I0313 00:40:02.730387 2729 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:40:02.730414 kubelet[2729]: I0313 00:40:02.730415 2729 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:40:02.732970 
kubelet[2729]: I0313 00:40:02.732949 2729 server.go:1262] "Started kubelet" Mar 13 00:40:02.735566 kubelet[2729]: I0313 00:40:02.734791 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:40:02.743372 kubelet[2729]: I0313 00:40:02.743103 2729 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:40:02.745852 kubelet[2729]: I0313 00:40:02.745782 2729 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:40:02.747456 kubelet[2729]: I0313 00:40:02.747170 2729 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:40:02.747626 kubelet[2729]: I0313 00:40:02.747602 2729 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:40:02.752974 kubelet[2729]: I0313 00:40:02.752234 2729 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:40:02.752974 kubelet[2729]: I0313 00:40:02.752335 2729 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:40:02.753097 kubelet[2729]: I0313 00:40:02.753033 2729 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:40:02.753660 kubelet[2729]: I0313 00:40:02.753525 2729 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:40:02.753834 kubelet[2729]: I0313 00:40:02.753819 2729 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:40:02.754306 kubelet[2729]: I0313 00:40:02.754294 2729 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:40:02.755306 kubelet[2729]: I0313 00:40:02.755285 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:40:02.756278 kubelet[2729]: I0313 00:40:02.756261 2729 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:40:02.756278 kubelet[2729]: I0313 00:40:02.756279 2729 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:40:02.756373 kubelet[2729]: I0313 00:40:02.756300 2729 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:40:02.756373 kubelet[2729]: E0313 00:40:02.756340 2729 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:40:02.759502 kubelet[2729]: I0313 00:40:02.759488 2729 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:40:02.759569 kubelet[2729]: I0313 00:40:02.759560 2729 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:40:02.769750 kubelet[2729]: E0313 00:40:02.769723 2729 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:40:02.816308 kubelet[2729]: I0313 00:40:02.816259 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:40:02.816308 kubelet[2729]: I0313 00:40:02.816272 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:40:02.816566 kubelet[2729]: I0313 00:40:02.816288 2729 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:40:02.816983 kubelet[2729]: I0313 00:40:02.816940 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:40:02.817080 kubelet[2729]: I0313 00:40:02.817038 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:40:02.817228 kubelet[2729]: I0313 00:40:02.817122 2729 policy_none.go:49] "None policy: Start" Mar 13 00:40:02.817228 kubelet[2729]: I0313 00:40:02.817157 2729 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:40:02.817228 kubelet[2729]: I0313 00:40:02.817168 2729 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:40:02.817474 kubelet[2729]: I0313 00:40:02.817408 2729 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:40:02.817474 kubelet[2729]: I0313 00:40:02.817420 2729 policy_none.go:47] "Start" Mar 13 00:40:02.822649 kubelet[2729]: E0313 00:40:02.821863 2729 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:40:02.822649 kubelet[2729]: I0313 00:40:02.821999 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:40:02.822649 kubelet[2729]: I0313 00:40:02.822019 2729 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:40:02.822649 kubelet[2729]: I0313 00:40:02.822545 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:40:02.824038 kubelet[2729]: E0313 00:40:02.824023 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:40:02.857150 kubelet[2729]: I0313 00:40:02.857106 2729 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:02.857495 kubelet[2729]: I0313 00:40:02.857481 2729 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-24" Mar 13 00:40:02.860805 kubelet[2729]: I0313 00:40:02.860771 2729 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.933133 kubelet[2729]: I0313 00:40:02.933112 2729 kubelet_node_status.go:75] "Attempting to register node" node="172-236-108-24" Mar 13 00:40:02.939785 kubelet[2729]: I0313 00:40:02.939768 2729 kubelet_node_status.go:124] "Node was previously registered" node="172-236-108-24" Mar 13 00:40:02.939910 kubelet[2729]: I0313 00:40:02.939891 2729 kubelet_node_status.go:78] "Successfully registered node" node="172-236-108-24" Mar 13 00:40:02.953594 kubelet[2729]: I0313 00:40:02.953555 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-ca-certs\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:02.953594 kubelet[2729]: I0313 00:40:02.953584 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:02.953695 kubelet[2729]: I0313 00:40:02.953603 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-ca-certs\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.953695 kubelet[2729]: I0313 00:40:02.953633 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.953695 kubelet[2729]: I0313 00:40:02.953649 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.953695 kubelet[2729]: I0313 00:40:02.953665 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30a2e9dc8638c962ae30f4f6fdd4daea-k8s-certs\") pod \"kube-apiserver-172-236-108-24\" (UID: \"30a2e9dc8638c962ae30f4f6fdd4daea\") " pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:02.953695 kubelet[2729]: I0313 00:40:02.953683 2729 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-k8s-certs\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.953809 kubelet[2729]: I0313 00:40:02.953705 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b2759688f07ab7a04febe620a512b11-kubeconfig\") pod \"kube-controller-manager-172-236-108-24\" (UID: \"5b2759688f07ab7a04febe620a512b11\") " pod="kube-system/kube-controller-manager-172-236-108-24" Mar 13 00:40:02.953809 kubelet[2729]: I0313 00:40:02.953719 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/625ccde0493eb355c8c4c4cd66360f5c-kubeconfig\") pod \"kube-scheduler-172-236-108-24\" (UID: \"625ccde0493eb355c8c4c4cd66360f5c\") " pod="kube-system/kube-scheduler-172-236-108-24" Mar 13 00:40:03.164190 kubelet[2729]: E0313 00:40:03.164052 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.166917 kubelet[2729]: E0313 00:40:03.166836 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.169252 kubelet[2729]: E0313 00:40:03.169025 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.727936 kubelet[2729]: I0313 00:40:03.727820 2729 apiserver.go:52] "Watching apiserver" Mar 13 00:40:03.753221 kubelet[2729]: I0313 00:40:03.753185 2729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:40:03.804647 kubelet[2729]: I0313 00:40:03.804605 2729 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:03.805468 kubelet[2729]: E0313 00:40:03.805447 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.805945 kubelet[2729]: E0313 00:40:03.805911 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.823504 kubelet[2729]: E0313 00:40:03.823467 2729 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-24\" already exists" pod="kube-system/kube-apiserver-172-236-108-24" Mar 13 00:40:03.823735 kubelet[2729]: E0313 00:40:03.823706 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:03.844575 kubelet[2729]: I0313 00:40:03.844289 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-108-24" podStartSLOduration=1.844266891 podStartE2EDuration="1.844266891s" 
podCreationTimestamp="2026-03-13 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:03.831437353 +0000 UTC m=+1.166990321" watchObservedRunningTime="2026-03-13 00:40:03.844266891 +0000 UTC m=+1.179819839" Mar 13 00:40:03.851558 kubelet[2729]: I0313 00:40:03.851518 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-108-24" podStartSLOduration=1.851469233 podStartE2EDuration="1.851469233s" podCreationTimestamp="2026-03-13 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:03.851208517 +0000 UTC m=+1.186761485" watchObservedRunningTime="2026-03-13 00:40:03.851469233 +0000 UTC m=+1.187022191" Mar 13 00:40:03.851813 kubelet[2729]: I0313 00:40:03.851644 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-108-24" podStartSLOduration=1.851610894 podStartE2EDuration="1.851610894s" podCreationTimestamp="2026-03-13 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:03.844568828 +0000 UTC m=+1.180121796" watchObservedRunningTime="2026-03-13 00:40:03.851610894 +0000 UTC m=+1.187163842" Mar 13 00:40:04.805652 kubelet[2729]: E0313 00:40:04.805375 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:04.805652 kubelet[2729]: E0313 00:40:04.805444 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:08.164359 kubelet[2729]: E0313 00:40:08.163833 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:08.500838 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 13 00:40:08.801708 kubelet[2729]: I0313 00:40:08.801509 2729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:40:08.802021 containerd[1551]: time="2026-03-13T00:40:08.801874440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 13 00:40:08.802794 kubelet[2729]: I0313 00:40:08.802004 2729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:40:08.811459 kubelet[2729]: E0313 00:40:08.811437 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:09.812989 kubelet[2729]: E0313 00:40:09.812928 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:09.847896 kubelet[2729]: E0313 00:40:09.847860 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:09.849654 systemd[1]: Created slice kubepods-besteffort-pod187bd2f1_e908_41ef_9e8a_0ebf0197deb5.slice - libcontainer container kubepods-besteffort-pod187bd2f1_e908_41ef_9e8a_0ebf0197deb5.slice. Mar 13 00:40:09.894749 kubelet[2729]: I0313 00:40:09.894712 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8v9\" (UniqueName: \"kubernetes.io/projected/187bd2f1-e908-41ef-9e8a-0ebf0197deb5-kube-api-access-wt8v9\") pod \"kube-proxy-dn65j\" (UID: \"187bd2f1-e908-41ef-9e8a-0ebf0197deb5\") " pod="kube-system/kube-proxy-dn65j" Mar 13 00:40:09.894987 kubelet[2729]: I0313 00:40:09.894972 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/187bd2f1-e908-41ef-9e8a-0ebf0197deb5-kube-proxy\") pod \"kube-proxy-dn65j\" (UID: \"187bd2f1-e908-41ef-9e8a-0ebf0197deb5\") " pod="kube-system/kube-proxy-dn65j" Mar 13 00:40:09.895089 kubelet[2729]: I0313 00:40:09.895078 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/187bd2f1-e908-41ef-9e8a-0ebf0197deb5-xtables-lock\") pod \"kube-proxy-dn65j\" (UID: \"187bd2f1-e908-41ef-9e8a-0ebf0197deb5\") " pod="kube-system/kube-proxy-dn65j" Mar 13 00:40:09.895226 kubelet[2729]: I0313 00:40:09.895213 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/187bd2f1-e908-41ef-9e8a-0ebf0197deb5-lib-modules\") pod \"kube-proxy-dn65j\" (UID: \"187bd2f1-e908-41ef-9e8a-0ebf0197deb5\") " pod="kube-system/kube-proxy-dn65j" Mar 13 00:40:09.903868 systemd[1]: Started sshd@7-172.236.108.24:22-198.235.24.125:49728.service - OpenSSH per-connection server daemon (198.235.24.125:49728). Mar 13 00:40:09.999112 systemd[1]: Created slice kubepods-besteffort-pod5222990a_5b92_4217_8c84_f902b93c1775.slice - libcontainer container kubepods-besteffort-pod5222990a_5b92_4217_8c84_f902b93c1775.slice. 
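The kuberuntime_manager.go line above ("Updating runtime config through cri with podcidr") is kubelet forwarding the node's newly assigned pod CIDR to the container runtime over CRI; containerd acknowledges it has no CNI config yet and waits for one to be installed. A minimal Go sketch of the same CRI call, assuming the default containerd socket path (the path is an assumption, not taken from this log):

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed socket path; the kubelet on this node may be configured differently.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // The same update kubelet performs when the node is assigned 192.168.0.0/24.
    _, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
        &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
    if err != nil {
        log.Fatal(err)
    }
}

The runtime records the CIDR but, as the containerd message says, keeps waiting for a CNI config to be dropped in place.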
Mar 13 00:40:10.097599 kubelet[2729]: I0313 00:40:10.097474 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5222990a-5b92-4217-8c84-f902b93c1775-var-lib-calico\") pod \"tigera-operator-5588576f44-92bz8\" (UID: \"5222990a-5b92-4217-8c84-f902b93c1775\") " pod="tigera-operator/tigera-operator-5588576f44-92bz8" Mar 13 00:40:10.097599 kubelet[2729]: I0313 00:40:10.097517 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5jb\" (UniqueName: \"kubernetes.io/projected/5222990a-5b92-4217-8c84-f902b93c1775-kube-api-access-5k5jb\") pod \"tigera-operator-5588576f44-92bz8\" (UID: \"5222990a-5b92-4217-8c84-f902b93c1775\") " pod="tigera-operator/tigera-operator-5588576f44-92bz8" Mar 13 00:40:10.105219 kubelet[2729]: E0313 00:40:10.105186 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.162468 kubelet[2729]: E0313 00:40:10.162434 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.163066 containerd[1551]: time="2026-03-13T00:40:10.163016906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dn65j,Uid:187bd2f1-e908-41ef-9e8a-0ebf0197deb5,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:10.178165 containerd[1551]: time="2026-03-13T00:40:10.178105194Z" level=info msg="connecting to shim ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4" address="unix:///run/containerd/s/7d1deec60887a182a64224646d8f996a5d9b910188fc72fffe8eead237cdcdae" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:10.200945 sshd[2778]: Connection closed by 198.235.24.125 port 49728 Mar 13 00:40:10.201917 systemd[1]: Started cri-containerd-ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4.scope - libcontainer container ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4. Mar 13 00:40:10.202432 systemd[1]: sshd@7-172.236.108.24:22-198.235.24.125:49728.service: Deactivated successfully. 
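The systemd unit names in these entries follow kubelet's systemd cgroup driver conventions: a besteffort-QoS pod gets a kubepods-besteffort-pod<uid>.slice (dashes in the UID replaced by underscores), and each container started by containerd's CRI plugin runs in a cri-containerd-<container-id>.scope. Note also that a pod sandbox and its containers share one shim: the kube-proxy sandbox and its container below both connect to the same unix:///run/containerd/s/7d1deec6… address. A small Go sketch that reproduces the slice naming observed here (the rule is inferred from these log lines, not quoted from kubelet source):

package main

import (
    "fmt"
    "strings"
)

// podSliceName maps a pod UID to the besteffort slice name seen in this log, e.g.
// 187bd2f1-e908-41ef-9e8a-0ebf0197deb5 -> kubepods-besteffort-pod187bd2f1_e908_41ef_9e8a_0ebf0197deb5.slice
func podSliceName(uid string) string {
    return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
    fmt.Println(podSliceName("187bd2f1-e908-41ef-9e8a-0ebf0197deb5"))
}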
Mar 13 00:40:10.238143 containerd[1551]: time="2026-03-13T00:40:10.238104331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dn65j,Uid:187bd2f1-e908-41ef-9e8a-0ebf0197deb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4\"" Mar 13 00:40:10.238989 kubelet[2729]: E0313 00:40:10.238968 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.244655 containerd[1551]: time="2026-03-13T00:40:10.244321562Z" level=info msg="CreateContainer within sandbox \"ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:40:10.259159 containerd[1551]: time="2026-03-13T00:40:10.259124938Z" level=info msg="Container cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:10.264038 containerd[1551]: time="2026-03-13T00:40:10.264013384Z" level=info msg="CreateContainer within sandbox \"ab969d330524f61241d81d4dfb1a03c0134c0d993e8554a78e71e711c0ad91f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138\"" Mar 13 00:40:10.264787 containerd[1551]: time="2026-03-13T00:40:10.264740300Z" level=info msg="StartContainer for \"cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138\"" Mar 13 00:40:10.269648 containerd[1551]: time="2026-03-13T00:40:10.269566318Z" level=info msg="connecting to shim cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138" address="unix:///run/containerd/s/7d1deec60887a182a64224646d8f996a5d9b910188fc72fffe8eead237cdcdae" protocol=ttrpc version=3 Mar 13 00:40:10.292748 systemd[1]: Started cri-containerd-cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138.scope - libcontainer container cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138. Mar 13 00:40:10.306644 containerd[1551]: time="2026-03-13T00:40:10.306585372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-92bz8,Uid:5222990a-5b92-4217-8c84-f902b93c1775,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:40:10.320226 containerd[1551]: time="2026-03-13T00:40:10.320126345Z" level=info msg="connecting to shim 600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd" address="unix:///run/containerd/s/c0921dc45ad69919ad47914c42dc6bd60cae1b59a6a0c308aaeaa1f0e8f4f23c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:10.347938 systemd[1]: Started cri-containerd-600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd.scope - libcontainer container 600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd. 
Mar 13 00:40:10.369549 containerd[1551]: time="2026-03-13T00:40:10.369511828Z" level=info msg="StartContainer for \"cbe4b5578df7d7a9412714558c1b361ffb042ec160c2219628e3abd09cb27138\" returns successfully" Mar 13 00:40:10.400239 containerd[1551]: time="2026-03-13T00:40:10.400205016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-92bz8,Uid:5222990a-5b92-4217-8c84-f902b93c1775,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd\"" Mar 13 00:40:10.402084 containerd[1551]: time="2026-03-13T00:40:10.402056067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:40:10.818721 kubelet[2729]: E0313 00:40:10.817796 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.824028 kubelet[2729]: E0313 00:40:10.823967 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.824403 kubelet[2729]: E0313 00:40:10.824387 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:10.844362 kubelet[2729]: I0313 00:40:10.844037 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dn65j" podStartSLOduration=1.8440233080000001 podStartE2EDuration="1.844023308s" podCreationTimestamp="2026-03-13 00:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:10.830961206 +0000 UTC m=+8.166514174" watchObservedRunningTime="2026-03-13 00:40:10.844023308 +0000 UTC m=+8.179576266" Mar 13 00:40:11.270349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926871041.mount: Deactivated successfully. 
Mar 13 00:40:12.934862 containerd[1551]: time="2026-03-13T00:40:12.934814289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:12.935776 containerd[1551]: time="2026-03-13T00:40:12.935633662Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:40:12.936385 containerd[1551]: time="2026-03-13T00:40:12.936359051Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:12.937871 containerd[1551]: time="2026-03-13T00:40:12.937844269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:12.938551 containerd[1551]: time="2026-03-13T00:40:12.938528663Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.536441799s" Mar 13 00:40:12.938645 containerd[1551]: time="2026-03-13T00:40:12.938608551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:40:12.942749 containerd[1551]: time="2026-03-13T00:40:12.942694436Z" level=info msg="CreateContainer within sandbox \"600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:40:12.949917 containerd[1551]: time="2026-03-13T00:40:12.948999366Z" level=info msg="Container e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:12.956732 containerd[1551]: time="2026-03-13T00:40:12.956709165Z" level=info msg="CreateContainer within sandbox \"600dbc90e441613140abfc57f2ef0d4ee0aec182c2f78d5ef8db0a902e4632bd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196\"" Mar 13 00:40:12.957104 containerd[1551]: time="2026-03-13T00:40:12.957086541Z" level=info msg="StartContainer for \"e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196\"" Mar 13 00:40:12.958376 containerd[1551]: time="2026-03-13T00:40:12.958318749Z" level=info msg="connecting to shim e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196" address="unix:///run/containerd/s/c0921dc45ad69919ad47914c42dc6bd60cae1b59a6a0c308aaeaa1f0e8f4f23c" protocol=ttrpc version=3 Mar 13 00:40:12.981732 systemd[1]: Started cri-containerd-e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196.scope - libcontainer container e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196. 
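As a rough rate check on the pull above: containerd reports bytes read=40846156 (about 40.8 MB) for quay.io/tigera/operator:v1.40.7 over a pull duration of 2.536441799 s, i.e. roughly 40846156 / 2.536 ≈ 16 MB/s, which also matches the gap between the PullImage request logged at 00:40:10.402 and the Pulled message at 00:40:12.938.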
Mar 13 00:40:13.011505 containerd[1551]: time="2026-03-13T00:40:13.011440323Z" level=info msg="StartContainer for \"e8327551821241e9a41d643c275c58e2f2638eeac9d1efdf493154ac764df196\" returns successfully" Mar 13 00:40:18.454312 sudo[1792]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:18.477106 sshd[1791]: Connection closed by 68.220.241.50 port 42852 Mar 13 00:40:18.476016 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:18.482182 systemd[1]: sshd@6-172.236.108.24:22-68.220.241.50:42852.service: Deactivated successfully. Mar 13 00:40:18.482972 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:40:18.486727 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:40:18.487038 systemd[1]: session-7.scope: Consumed 4.656s CPU time, 226.5M memory peak. Mar 13 00:40:18.493281 systemd-logind[1523]: Removed session 7. Mar 13 00:40:20.477524 kubelet[2729]: I0313 00:40:20.477477 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-92bz8" podStartSLOduration=8.939619895 podStartE2EDuration="11.47746178s" podCreationTimestamp="2026-03-13 00:40:09 +0000 UTC" firstStartedPulling="2026-03-13 00:40:10.401529247 +0000 UTC m=+7.737082205" lastFinishedPulling="2026-03-13 00:40:12.939371132 +0000 UTC m=+10.274924090" observedRunningTime="2026-03-13 00:40:13.839299855 +0000 UTC m=+11.174852823" watchObservedRunningTime="2026-03-13 00:40:20.47746178 +0000 UTC m=+17.813014728" Mar 13 00:40:20.489365 systemd[1]: Created slice kubepods-besteffort-podc1d59576_011e_4313_b0bf_592ba26b1ef0.slice - libcontainer container kubepods-besteffort-podc1d59576_011e_4313_b0bf_592ba26b1ef0.slice. Mar 13 00:40:20.550112 systemd[1]: Created slice kubepods-besteffort-pod8376b4fb_35a0_4af2_b225_f611180b931e.slice - libcontainer container kubepods-besteffort-pod8376b4fb_35a0_4af2_b225_f611180b931e.slice. 
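The pod_startup_latency_tracker record for tigera-operator above shows how its two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The numbers in this record are consistent with that:

    E2E  = 00:40:20.477461780 - 00:40:09.000000000 = 11.477461780 s
    pull = 00:40:12.939371132 - 00:40:10.401529247 =  2.537841885 s
    SLO  = 11.477461780 - 2.537841885              =  8.939619895 s

For the static control-plane pods and kube-proxy earlier, the pull timestamps are zero values, so the two durations come out equal.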
Mar 13 00:40:20.565252 kubelet[2729]: I0313 00:40:20.565209 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8376b4fb-35a0-4af2-b225-f611180b931e-tigera-ca-bundle\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565252 kubelet[2729]: I0313 00:40:20.565240 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-flexvol-driver-host\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565252 kubelet[2729]: I0313 00:40:20.565259 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-lib-modules\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565408 kubelet[2729]: I0313 00:40:20.565275 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8376b4fb-35a0-4af2-b225-f611180b931e-node-certs\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565408 kubelet[2729]: I0313 00:40:20.565290 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c1d59576-011e-4313-b0bf-592ba26b1ef0-typha-certs\") pod \"calico-typha-5d5694fb4c-8ggqt\" (UID: \"c1d59576-011e-4313-b0bf-592ba26b1ef0\") " pod="calico-system/calico-typha-5d5694fb4c-8ggqt" Mar 13 00:40:20.565408 kubelet[2729]: I0313 00:40:20.565304 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-cni-bin-dir\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565408 kubelet[2729]: I0313 00:40:20.565319 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-cni-log-dir\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565408 kubelet[2729]: I0313 00:40:20.565333 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-nodeproc\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565529 kubelet[2729]: I0313 00:40:20.565349 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-var-lib-calico\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565529 kubelet[2729]: I0313 00:40:20.565364 2729 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-bpffs\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565529 kubelet[2729]: I0313 00:40:20.565377 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7z4p\" (UniqueName: \"kubernetes.io/projected/c1d59576-011e-4313-b0bf-592ba26b1ef0-kube-api-access-g7z4p\") pod \"calico-typha-5d5694fb4c-8ggqt\" (UID: \"c1d59576-011e-4313-b0bf-592ba26b1ef0\") " pod="calico-system/calico-typha-5d5694fb4c-8ggqt" Mar 13 00:40:20.565529 kubelet[2729]: I0313 00:40:20.565393 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-xtables-lock\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.565529 kubelet[2729]: I0313 00:40:20.565409 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-cni-net-dir\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.566080 kubelet[2729]: I0313 00:40:20.565425 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d59576-011e-4313-b0bf-592ba26b1ef0-tigera-ca-bundle\") pod \"calico-typha-5d5694fb4c-8ggqt\" (UID: \"c1d59576-011e-4313-b0bf-592ba26b1ef0\") " pod="calico-system/calico-typha-5d5694fb4c-8ggqt" Mar 13 00:40:20.566080 kubelet[2729]: I0313 00:40:20.565445 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-var-run-calico\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.566080 kubelet[2729]: I0313 00:40:20.565461 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-policysync\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.566080 kubelet[2729]: I0313 00:40:20.565475 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8376b4fb-35a0-4af2-b225-f611180b931e-sys-fs\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.566080 kubelet[2729]: I0313 00:40:20.565493 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cd5\" (UniqueName: \"kubernetes.io/projected/8376b4fb-35a0-4af2-b225-f611180b931e-kube-api-access-55cd5\") pod \"calico-node-4j2bd\" (UID: \"8376b4fb-35a0-4af2-b225-f611180b931e\") " pod="calico-system/calico-node-4j2bd" Mar 13 00:40:20.647665 kubelet[2729]: E0313 00:40:20.644837 2729 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:20.666606 kubelet[2729]: I0313 00:40:20.666554 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/648ae89e-362f-4fdb-8142-d604f3582645-kubelet-dir\") pod \"csi-node-driver-8v2lv\" (UID: \"648ae89e-362f-4fdb-8142-d604f3582645\") " pod="calico-system/csi-node-driver-8v2lv" Mar 13 00:40:20.667311 kubelet[2729]: I0313 00:40:20.667290 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/648ae89e-362f-4fdb-8142-d604f3582645-socket-dir\") pod \"csi-node-driver-8v2lv\" (UID: \"648ae89e-362f-4fdb-8142-d604f3582645\") " pod="calico-system/csi-node-driver-8v2lv" Mar 13 00:40:20.667371 kubelet[2729]: I0313 00:40:20.667314 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/648ae89e-362f-4fdb-8142-d604f3582645-varrun\") pod \"csi-node-driver-8v2lv\" (UID: \"648ae89e-362f-4fdb-8142-d604f3582645\") " pod="calico-system/csi-node-driver-8v2lv" Mar 13 00:40:20.667371 kubelet[2729]: I0313 00:40:20.667338 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ctg\" (UniqueName: \"kubernetes.io/projected/648ae89e-362f-4fdb-8142-d604f3582645-kube-api-access-f4ctg\") pod \"csi-node-driver-8v2lv\" (UID: \"648ae89e-362f-4fdb-8142-d604f3582645\") " pod="calico-system/csi-node-driver-8v2lv" Mar 13 00:40:20.667420 kubelet[2729]: I0313 00:40:20.667372 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/648ae89e-362f-4fdb-8142-d604f3582645-registration-dir\") pod \"csi-node-driver-8v2lv\" (UID: \"648ae89e-362f-4fdb-8142-d604f3582645\") " pod="calico-system/csi-node-driver-8v2lv" Mar 13 00:40:20.679841 kubelet[2729]: E0313 00:40:20.679795 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.679959 kubelet[2729]: W0313 00:40:20.679820 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.679959 kubelet[2729]: E0313 00:40:20.679945 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.681675 kubelet[2729]: E0313 00:40:20.680303 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.681766 kubelet[2729]: W0313 00:40:20.681751 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.681853 kubelet[2729]: E0313 00:40:20.681831 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.695376 kubelet[2729]: E0313 00:40:20.695340 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.695376 kubelet[2729]: W0313 00:40:20.695356 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.695376 kubelet[2729]: E0313 00:40:20.695369 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.706545 kubelet[2729]: E0313 00:40:20.706526 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.706709 kubelet[2729]: W0313 00:40:20.706666 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.706709 kubelet[2729]: E0313 00:40:20.706685 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.769282 kubelet[2729]: E0313 00:40:20.769163 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.769282 kubelet[2729]: W0313 00:40:20.769181 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.769282 kubelet[2729]: E0313 00:40:20.769198 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.770056 kubelet[2729]: E0313 00:40:20.769906 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.770056 kubelet[2729]: W0313 00:40:20.769917 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.770056 kubelet[2729]: E0313 00:40:20.769926 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.770327 kubelet[2729]: E0313 00:40:20.770154 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.770327 kubelet[2729]: W0313 00:40:20.770162 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.770327 kubelet[2729]: E0313 00:40:20.770170 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.770483 kubelet[2729]: E0313 00:40:20.770458 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.770562 kubelet[2729]: W0313 00:40:20.770537 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.770562 kubelet[2729]: E0313 00:40:20.770550 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.772088 kubelet[2729]: E0313 00:40:20.772026 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.772195 kubelet[2729]: W0313 00:40:20.772168 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.772195 kubelet[2729]: E0313 00:40:20.772183 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.772502 kubelet[2729]: E0313 00:40:20.772472 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.772502 kubelet[2729]: W0313 00:40:20.772483 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.772502 kubelet[2729]: E0313 00:40:20.772491 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.773018 kubelet[2729]: E0313 00:40:20.772989 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.773018 kubelet[2729]: W0313 00:40:20.772999 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.773018 kubelet[2729]: E0313 00:40:20.773007 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.773340 kubelet[2729]: E0313 00:40:20.773312 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.773340 kubelet[2729]: W0313 00:40:20.773321 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.773340 kubelet[2729]: E0313 00:40:20.773329 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.774646 kubelet[2729]: E0313 00:40:20.773649 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.774646 kubelet[2729]: W0313 00:40:20.773659 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.774646 kubelet[2729]: E0313 00:40:20.773667 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.775243 kubelet[2729]: E0313 00:40:20.775209 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.775243 kubelet[2729]: W0313 00:40:20.775219 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.775243 kubelet[2729]: E0313 00:40:20.775227 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.775579 kubelet[2729]: E0313 00:40:20.775550 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.775579 kubelet[2729]: W0313 00:40:20.775559 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.775579 kubelet[2729]: E0313 00:40:20.775568 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.776207 kubelet[2729]: E0313 00:40:20.776035 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.776207 kubelet[2729]: W0313 00:40:20.776043 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.776207 kubelet[2729]: E0313 00:40:20.776051 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.776354 kubelet[2729]: E0313 00:40:20.776344 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.776409 kubelet[2729]: W0313 00:40:20.776398 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.776461 kubelet[2729]: E0313 00:40:20.776451 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.777596 kubelet[2729]: E0313 00:40:20.777565 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.777596 kubelet[2729]: W0313 00:40:20.777576 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.777596 kubelet[2729]: E0313 00:40:20.777585 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.778013 kubelet[2729]: E0313 00:40:20.777982 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.778013 kubelet[2729]: W0313 00:40:20.777992 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.778013 kubelet[2729]: E0313 00:40:20.778002 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.778399 kubelet[2729]: E0313 00:40:20.778388 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.778484 kubelet[2729]: W0313 00:40:20.778441 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.778484 kubelet[2729]: E0313 00:40:20.778473 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.778848 kubelet[2729]: E0313 00:40:20.778816 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.778848 kubelet[2729]: W0313 00:40:20.778826 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.778848 kubelet[2729]: E0313 00:40:20.778835 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.779957 kubelet[2729]: E0313 00:40:20.779946 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.780044 kubelet[2729]: W0313 00:40:20.780011 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.780044 kubelet[2729]: E0313 00:40:20.780024 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.780300 kubelet[2729]: E0313 00:40:20.780290 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.780359 kubelet[2729]: W0313 00:40:20.780348 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.780401 kubelet[2729]: E0313 00:40:20.780392 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.780680 kubelet[2729]: E0313 00:40:20.780666 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.780755 kubelet[2729]: W0313 00:40:20.780729 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.780755 kubelet[2729]: E0313 00:40:20.780741 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.781713 kubelet[2729]: E0313 00:40:20.781682 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.781713 kubelet[2729]: W0313 00:40:20.781693 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.781713 kubelet[2729]: E0313 00:40:20.781702 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.782071 kubelet[2729]: E0313 00:40:20.782040 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.782071 kubelet[2729]: W0313 00:40:20.782051 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.782071 kubelet[2729]: E0313 00:40:20.782059 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.782393 kubelet[2729]: E0313 00:40:20.782364 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.782393 kubelet[2729]: W0313 00:40:20.782374 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.782393 kubelet[2729]: E0313 00:40:20.782383 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.782775 kubelet[2729]: E0313 00:40:20.782746 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.782775 kubelet[2729]: W0313 00:40:20.782756 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.782775 kubelet[2729]: E0313 00:40:20.782764 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.784003 kubelet[2729]: E0313 00:40:20.783964 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.784003 kubelet[2729]: W0313 00:40:20.783975 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.784003 kubelet[2729]: E0313 00:40:20.783985 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:20.797802 kubelet[2729]: E0313 00:40:20.797737 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:20.797802 kubelet[2729]: W0313 00:40:20.797756 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:20.797802 kubelet[2729]: E0313 00:40:20.797771 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:20.800096 kubelet[2729]: E0313 00:40:20.799190 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:20.800988 containerd[1551]: time="2026-03-13T00:40:20.800964177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5694fb4c-8ggqt,Uid:c1d59576-011e-4313-b0bf-592ba26b1ef0,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:20.819362 containerd[1551]: time="2026-03-13T00:40:20.819292484Z" level=info msg="connecting to shim 1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa" address="unix:///run/containerd/s/d92c183f12745597e5e718a67e067721fce5b2cbd3ab6eebe251a5dc4f5d1300" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:20.842750 systemd[1]: Started cri-containerd-1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa.scope - libcontainer container 1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa. Mar 13 00:40:20.856260 containerd[1551]: time="2026-03-13T00:40:20.856228906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4j2bd,Uid:8376b4fb-35a0-4af2-b225-f611180b931e,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:20.871211 containerd[1551]: time="2026-03-13T00:40:20.871180172Z" level=info msg="connecting to shim 80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351" address="unix:///run/containerd/s/6cae6bf5fdb83601027ecc5bf82e8822a5abb4f2c663de19abca82e94afe31fe" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:20.896862 systemd[1]: Started cri-containerd-80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351.scope - libcontainer container 80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351. 
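The "Error syncing pod, skipping … NetworkReady=false … cni plugin not initialized" entries for csi-node-driver-8v2lv persist because containerd still has no CNI configuration (see its earlier "wait for other system components to drop the config" message), and kubelet will not start pod-network pods until the runtime reports the network ready. On a Calico cluster that configuration is normally written to /etc/cni/net.d by calico-node once it is running. An illustrative shape of such a conflist, assuming Calico defaults (the actual file is rendered by the calico-node installer and is not shown in this log):

    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "ipam": { "type": "calico-ipam" },
          "policy": { "type": "k8s" },
          "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": { "portMappings": true }
        }
      ]
    }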
Mar 13 00:40:20.914471 containerd[1551]: time="2026-03-13T00:40:20.914443542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5694fb4c-8ggqt,Uid:c1d59576-011e-4313-b0bf-592ba26b1ef0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa\"" Mar 13 00:40:20.915320 kubelet[2729]: E0313 00:40:20.915302 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:20.916499 containerd[1551]: time="2026-03-13T00:40:20.916307476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:40:20.942073 containerd[1551]: time="2026-03-13T00:40:20.942044629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4j2bd,Uid:8376b4fb-35a0-4af2-b225-f611180b931e,Namespace:calico-system,Attempt:0,} returns sandbox id \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\"" Mar 13 00:40:22.194670 containerd[1551]: time="2026-03-13T00:40:22.194577136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.196726 containerd[1551]: time="2026-03-13T00:40:22.196663693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 13 00:40:22.199048 containerd[1551]: time="2026-03-13T00:40:22.199008154Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.203063 containerd[1551]: time="2026-03-13T00:40:22.203028929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.203787 containerd[1551]: time="2026-03-13T00:40:22.203748651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.287415847s" Mar 13 00:40:22.203787 containerd[1551]: time="2026-03-13T00:40:22.203780201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 13 00:40:22.206497 containerd[1551]: time="2026-03-13T00:40:22.206455994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:40:22.218739 containerd[1551]: time="2026-03-13T00:40:22.218672510Z" level=info msg="CreateContainer within sandbox \"1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:40:22.231592 containerd[1551]: time="2026-03-13T00:40:22.230880289Z" level=info msg="Container 7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:22.236305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857480564.mount: Deactivated successfully. 
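The repeated driver-call.go / plugins.go errors are kubelet's FlexVolume prober calling /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with "init" before Calico has installed that driver; the call produces no output, so the empty string fails JSON parsing ("unexpected end of JSON input"). The ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled above is the calico-node init container that installs the uds binary into that directory (mounted as flexvol-driver-host), which is what eventually satisfies the probe. For reference, a FlexVolume driver is expected to answer "init" with a small JSON status; a stand-in sketch (not Calico's real driver) would look like:

package main

import (
    "fmt"
    "os"
)

func main() {
    // kubelet invokes the driver as: <driver> init
    if len(os.Args) > 1 && os.Args[1] == "init" {
        // The empty output seen in the log is what triggers
        // "unexpected end of JSON input" in kubelet's driver-call parsing.
        fmt.Println(`{"status": "Success", "capabilities": {"attach": false}}`)
        return
    }
    fmt.Println(`{"status": "Not supported"}`)
    os.Exit(1)
}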
Mar 13 00:40:22.238803 containerd[1551]: time="2026-03-13T00:40:22.238767372Z" level=info msg="CreateContainer within sandbox \"1b5a7bf8430e765ab502d06c00f442a74001b633b44430c56259aa2552a04daa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0\"" Mar 13 00:40:22.239542 containerd[1551]: time="2026-03-13T00:40:22.239512578Z" level=info msg="StartContainer for \"7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0\"" Mar 13 00:40:22.240848 containerd[1551]: time="2026-03-13T00:40:22.240827461Z" level=info msg="connecting to shim 7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0" address="unix:///run/containerd/s/d92c183f12745597e5e718a67e067721fce5b2cbd3ab6eebe251a5dc4f5d1300" protocol=ttrpc version=3 Mar 13 00:40:22.261770 systemd[1]: Started cri-containerd-7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0.scope - libcontainer container 7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0. Mar 13 00:40:22.322080 containerd[1551]: time="2026-03-13T00:40:22.321884561Z" level=info msg="StartContainer for \"7d14a10200459d226b41a9f77d92a9058c45a41f54a677c9bb508bd84fadc4c0\" returns successfully" Mar 13 00:40:22.668098 update_engine[1525]: I20260313 00:40:22.668026 1525 update_attempter.cc:509] Updating boot flags... Mar 13 00:40:22.773442 kubelet[2729]: E0313 00:40:22.772203 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:22.863342 kubelet[2729]: E0313 00:40:22.863087 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.867219 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.869385 kubelet[2729]: W0313 00:40:22.867273 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.867290 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.867756 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.869385 kubelet[2729]: W0313 00:40:22.867765 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.867774 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.868177 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.869385 kubelet[2729]: W0313 00:40:22.868185 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.869385 kubelet[2729]: E0313 00:40:22.868218 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.869697 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.870436 kubelet[2729]: W0313 00:40:22.869734 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.869744 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.869965 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.870436 kubelet[2729]: W0313 00:40:22.869973 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.869981 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.870172 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.870436 kubelet[2729]: W0313 00:40:22.870180 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.870188 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.870436 kubelet[2729]: E0313 00:40:22.870416 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.871255 kubelet[2729]: W0313 00:40:22.870423 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.870451 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.870715 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.871255 kubelet[2729]: W0313 00:40:22.870722 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.870730 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.870964 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.871255 kubelet[2729]: W0313 00:40:22.870971 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.870979 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.871255 kubelet[2729]: E0313 00:40:22.871212 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.871255 kubelet[2729]: W0313 00:40:22.871219 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.872321 kubelet[2729]: E0313 00:40:22.871227 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.872321 kubelet[2729]: E0313 00:40:22.871516 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.872321 kubelet[2729]: W0313 00:40:22.871524 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.872321 kubelet[2729]: E0313 00:40:22.871532 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.872321 kubelet[2729]: E0313 00:40:22.872204 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.872321 kubelet[2729]: W0313 00:40:22.872213 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.872321 kubelet[2729]: E0313 00:40:22.872221 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.874798 kubelet[2729]: E0313 00:40:22.874744 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.874798 kubelet[2729]: W0313 00:40:22.874762 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.874798 kubelet[2729]: E0313 00:40:22.874794 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.878637 kubelet[2729]: E0313 00:40:22.878190 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.878637 kubelet[2729]: W0313 00:40:22.878207 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.878637 kubelet[2729]: E0313 00:40:22.878216 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.878637 kubelet[2729]: E0313 00:40:22.878452 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.878637 kubelet[2729]: W0313 00:40:22.878460 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.878637 kubelet[2729]: E0313 00:40:22.878468 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.889917 kubelet[2729]: E0313 00:40:22.889888 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.889917 kubelet[2729]: W0313 00:40:22.889907 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.889917 kubelet[2729]: E0313 00:40:22.889918 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.890637 kubelet[2729]: E0313 00:40:22.890221 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.890637 kubelet[2729]: W0313 00:40:22.890234 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.890637 kubelet[2729]: E0313 00:40:22.890243 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.891780 kubelet[2729]: E0313 00:40:22.891756 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.891780 kubelet[2729]: W0313 00:40:22.891772 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.891780 kubelet[2729]: E0313 00:40:22.891782 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.893455 kubelet[2729]: E0313 00:40:22.893425 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.893455 kubelet[2729]: W0313 00:40:22.893442 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.893532 kubelet[2729]: E0313 00:40:22.893463 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.893866 kubelet[2729]: E0313 00:40:22.893719 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.893866 kubelet[2729]: W0313 00:40:22.893731 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.893866 kubelet[2729]: E0313 00:40:22.893740 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.894761 kubelet[2729]: E0313 00:40:22.894320 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.894761 kubelet[2729]: W0313 00:40:22.894331 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.894761 kubelet[2729]: E0313 00:40:22.894352 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.895039 kubelet[2729]: E0313 00:40:22.894943 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.895039 kubelet[2729]: W0313 00:40:22.894955 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.895095 kubelet[2729]: E0313 00:40:22.895076 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.900020 kubelet[2729]: E0313 00:40:22.899761 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.900020 kubelet[2729]: W0313 00:40:22.899779 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.900020 kubelet[2729]: E0313 00:40:22.899789 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.903075 kubelet[2729]: E0313 00:40:22.902116 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.903075 kubelet[2729]: W0313 00:40:22.902133 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.903075 kubelet[2729]: E0313 00:40:22.902260 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.903075 kubelet[2729]: E0313 00:40:22.902734 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.903075 kubelet[2729]: W0313 00:40:22.902742 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.903075 kubelet[2729]: E0313 00:40:22.902751 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.903785 kubelet[2729]: E0313 00:40:22.903724 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.903785 kubelet[2729]: W0313 00:40:22.903740 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.904754 kubelet[2729]: E0313 00:40:22.904654 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.905517 kubelet[2729]: E0313 00:40:22.905126 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.905517 kubelet[2729]: W0313 00:40:22.905166 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.905517 kubelet[2729]: E0313 00:40:22.905175 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.907142 kubelet[2729]: E0313 00:40:22.906986 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.907142 kubelet[2729]: W0313 00:40:22.906999 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.907142 kubelet[2729]: E0313 00:40:22.907022 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.907234 kubelet[2729]: E0313 00:40:22.907208 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.907234 kubelet[2729]: W0313 00:40:22.907216 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.907234 kubelet[2729]: E0313 00:40:22.907225 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.907536 kubelet[2729]: E0313 00:40:22.907406 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.907536 kubelet[2729]: W0313 00:40:22.907416 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.907536 kubelet[2729]: E0313 00:40:22.907424 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.910546 kubelet[2729]: E0313 00:40:22.910266 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.910546 kubelet[2729]: W0313 00:40:22.910385 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.910546 kubelet[2729]: E0313 00:40:22.910398 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:22.914087 kubelet[2729]: E0313 00:40:22.913195 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.914087 kubelet[2729]: W0313 00:40:22.913208 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.914087 kubelet[2729]: E0313 00:40:22.913218 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:22.931804 kubelet[2729]: E0313 00:40:22.931712 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:22.931804 kubelet[2729]: W0313 00:40:22.931750 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:22.931804 kubelet[2729]: E0313 00:40:22.931763 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:23.027353 containerd[1551]: time="2026-03-13T00:40:23.027307855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:23.029111 containerd[1551]: time="2026-03-13T00:40:23.028974309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:40:23.029581 containerd[1551]: time="2026-03-13T00:40:23.029550970Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:23.036476 containerd[1551]: time="2026-03-13T00:40:23.035535363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:23.042276 containerd[1551]: time="2026-03-13T00:40:23.041660691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 835.172286ms" Mar 13 00:40:23.042276 containerd[1551]: time="2026-03-13T00:40:23.041692070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:40:23.050180 containerd[1551]: time="2026-03-13T00:40:23.050144145Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:40:23.065967 containerd[1551]: time="2026-03-13T00:40:23.065933896Z" level=info msg="Container 957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:23.066578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318328192.mount: Deactivated successfully. 
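[Editor's note] The repeated kubelet errors above come from probing the FlexVolume plugin directory nodeagent~uds before its driver binary exists: the driver call returns empty output, which cannot be parsed as JSON, hence "unexpected end of JSON input". The flexvol-driver container created just below (the Calico pod2daemon-flexvol step) is what installs that binary. A minimal Go sketch, illustrative only and not kubelet source (the struct fields are assumptions), reproducing the unmarshal failure:

// flexvol_unmarshal_sketch.go - shows why empty driver output yields the logged error.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus loosely mirrors a FlexVolume driver response; field names are
// illustrative assumptions, not the kubelet's exact struct.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := "" // what an absent driver executable effectively "returns"

	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		// Prints: failed to unmarshal driver output: unexpected end of JSON input
		fmt.Println("failed to unmarshal driver output:", err)
	}
}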
Mar 13 00:40:23.082473 containerd[1551]: time="2026-03-13T00:40:23.082132986Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c\"" Mar 13 00:40:23.084120 containerd[1551]: time="2026-03-13T00:40:23.083842937Z" level=info msg="StartContainer for \"957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c\"" Mar 13 00:40:23.089942 containerd[1551]: time="2026-03-13T00:40:23.088600243Z" level=info msg="connecting to shim 957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c" address="unix:///run/containerd/s/6cae6bf5fdb83601027ecc5bf82e8822a5abb4f2c663de19abca82e94afe31fe" protocol=ttrpc version=3 Mar 13 00:40:23.143748 systemd[1]: Started cri-containerd-957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c.scope - libcontainer container 957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c. Mar 13 00:40:23.229482 containerd[1551]: time="2026-03-13T00:40:23.228799691Z" level=info msg="StartContainer for \"957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c\" returns successfully" Mar 13 00:40:23.245992 systemd[1]: cri-containerd-957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c.scope: Deactivated successfully. Mar 13 00:40:23.250094 containerd[1551]: time="2026-03-13T00:40:23.250065195Z" level=info msg="received container exit event container_id:\"957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c\" id:\"957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c\" pid:3383 exited_at:{seconds:1773362423 nanos:249761230}" Mar 13 00:40:23.277299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-957addc1a905a9b9e9c323a954a9782ef7ac57ab325737f9e8b11ecdeba05f3c-rootfs.mount: Deactivated successfully. 
Mar 13 00:40:23.870114 kubelet[2729]: I0313 00:40:23.870061 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:23.871213 kubelet[2729]: E0313 00:40:23.871013 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:23.874886 containerd[1551]: time="2026-03-13T00:40:23.874856929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:40:23.898521 kubelet[2729]: I0313 00:40:23.898464 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d5694fb4c-8ggqt" podStartSLOduration=2.608357584 podStartE2EDuration="3.898451931s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:20.916105633 +0000 UTC m=+18.251658591" lastFinishedPulling="2026-03-13 00:40:22.20619999 +0000 UTC m=+19.541752938" observedRunningTime="2026-03-13 00:40:22.885374559 +0000 UTC m=+20.220927517" watchObservedRunningTime="2026-03-13 00:40:23.898451931 +0000 UTC m=+21.234004889" Mar 13 00:40:24.760313 kubelet[2729]: E0313 00:40:24.760261 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:26.761956 kubelet[2729]: E0313 00:40:26.761910 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:27.577267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503169668.mount: Deactivated successfully. 
Mar 13 00:40:27.606371 containerd[1551]: time="2026-03-13T00:40:27.606324524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:27.607143 containerd[1551]: time="2026-03-13T00:40:27.607045435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:40:27.607685 containerd[1551]: time="2026-03-13T00:40:27.607656730Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:27.609124 containerd[1551]: time="2026-03-13T00:40:27.609097321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:27.610217 containerd[1551]: time="2026-03-13T00:40:27.610184865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.735189041s" Mar 13 00:40:27.610276 containerd[1551]: time="2026-03-13T00:40:27.610225505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:40:27.616473 containerd[1551]: time="2026-03-13T00:40:27.616303444Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:40:27.627001 containerd[1551]: time="2026-03-13T00:40:27.625949257Z" level=info msg="Container 0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:27.630379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176808698.mount: Deactivated successfully. Mar 13 00:40:27.635248 containerd[1551]: time="2026-03-13T00:40:27.635208856Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668\"" Mar 13 00:40:27.635914 containerd[1551]: time="2026-03-13T00:40:27.635841903Z" level=info msg="StartContainer for \"0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668\"" Mar 13 00:40:27.637392 containerd[1551]: time="2026-03-13T00:40:27.637362963Z" level=info msg="connecting to shim 0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668" address="unix:///run/containerd/s/6cae6bf5fdb83601027ecc5bf82e8822a5abb4f2c663de19abca82e94afe31fe" protocol=ttrpc version=3 Mar 13 00:40:27.661746 systemd[1]: Started cri-containerd-0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668.scope - libcontainer container 0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668. 
Mar 13 00:40:27.739142 containerd[1551]: time="2026-03-13T00:40:27.739086262Z" level=info msg="StartContainer for \"0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668\" returns successfully" Mar 13 00:40:27.779541 systemd[1]: cri-containerd-0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668.scope: Deactivated successfully. Mar 13 00:40:27.783822 containerd[1551]: time="2026-03-13T00:40:27.783772924Z" level=info msg="received container exit event container_id:\"0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668\" id:\"0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668\" pid:3438 exited_at:{seconds:1773362427 nanos:783327232}" Mar 13 00:40:28.576751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c27e2d9c50f0e9b8c9441ee25bbac5dafac2a0992bcf4c3492e925b75c22668-rootfs.mount: Deactivated successfully. Mar 13 00:40:28.757654 kubelet[2729]: E0313 00:40:28.757304 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:28.895451 containerd[1551]: time="2026-03-13T00:40:28.894994028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:40:30.649944 containerd[1551]: time="2026-03-13T00:40:30.649878648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.650801 containerd[1551]: time="2026-03-13T00:40:30.650768955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:40:30.651426 containerd[1551]: time="2026-03-13T00:40:30.651381702Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.653130 containerd[1551]: time="2026-03-13T00:40:30.653064364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.653941 containerd[1551]: time="2026-03-13T00:40:30.653880158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.758803321s" Mar 13 00:40:30.653941 containerd[1551]: time="2026-03-13T00:40:30.653931321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:40:30.659130 containerd[1551]: time="2026-03-13T00:40:30.659008950Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:40:30.667760 containerd[1551]: time="2026-03-13T00:40:30.667725281Z" level=info msg="Container e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:30.679931 containerd[1551]: 
time="2026-03-13T00:40:30.679904128Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d\"" Mar 13 00:40:30.680414 containerd[1551]: time="2026-03-13T00:40:30.680345301Z" level=info msg="StartContainer for \"e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d\"" Mar 13 00:40:30.682115 containerd[1551]: time="2026-03-13T00:40:30.682069511Z" level=info msg="connecting to shim e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d" address="unix:///run/containerd/s/6cae6bf5fdb83601027ecc5bf82e8822a5abb4f2c663de19abca82e94afe31fe" protocol=ttrpc version=3 Mar 13 00:40:30.704774 systemd[1]: Started cri-containerd-e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d.scope - libcontainer container e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d. Mar 13 00:40:30.757038 kubelet[2729]: E0313 00:40:30.757004 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8v2lv" podUID="648ae89e-362f-4fdb-8142-d604f3582645" Mar 13 00:40:30.804591 containerd[1551]: time="2026-03-13T00:40:30.804550829Z" level=info msg="StartContainer for \"e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d\" returns successfully" Mar 13 00:40:31.276103 containerd[1551]: time="2026-03-13T00:40:31.276039307Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:40:31.279424 systemd[1]: cri-containerd-e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d.scope: Deactivated successfully. Mar 13 00:40:31.279744 systemd[1]: cri-containerd-e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d.scope: Consumed 477ms CPU time, 191.2M memory peak, 796K read from disk, 177M written to disk. Mar 13 00:40:31.280574 containerd[1551]: time="2026-03-13T00:40:31.280548183Z" level=info msg="received container exit event container_id:\"e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d\" id:\"e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d\" pid:3499 exited_at:{seconds:1773362431 nanos:279212457}" Mar 13 00:40:31.299057 kubelet[2729]: I0313 00:40:31.299036 2729 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:40:31.341085 systemd[1]: Created slice kubepods-burstable-poddb812fdd_5246_4625_be37_b6be535eb373.slice - libcontainer container kubepods-burstable-poddb812fdd_5246_4625_be37_b6be535eb373.slice. 
Mar 13 00:40:31.358767 kubelet[2729]: I0313 00:40:31.358736 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a2a026-4325-468c-8895-cbf23d722c33-config-volume\") pod \"coredns-66bc5c9577-zl8b9\" (UID: \"75a2a026-4325-468c-8895-cbf23d722c33\") " pod="kube-system/coredns-66bc5c9577-zl8b9" Mar 13 00:40:31.359368 kubelet[2729]: I0313 00:40:31.358772 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbqf\" (UniqueName: \"kubernetes.io/projected/db812fdd-5246-4625-be37-b6be535eb373-kube-api-access-tgbqf\") pod \"coredns-66bc5c9577-85w7t\" (UID: \"db812fdd-5246-4625-be37-b6be535eb373\") " pod="kube-system/coredns-66bc5c9577-85w7t" Mar 13 00:40:31.359368 kubelet[2729]: I0313 00:40:31.358791 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db812fdd-5246-4625-be37-b6be535eb373-config-volume\") pod \"coredns-66bc5c9577-85w7t\" (UID: \"db812fdd-5246-4625-be37-b6be535eb373\") " pod="kube-system/coredns-66bc5c9577-85w7t" Mar 13 00:40:31.359368 kubelet[2729]: I0313 00:40:31.358804 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbgdx\" (UniqueName: \"kubernetes.io/projected/75a2a026-4325-468c-8895-cbf23d722c33-kube-api-access-bbgdx\") pod \"coredns-66bc5c9577-zl8b9\" (UID: \"75a2a026-4325-468c-8895-cbf23d722c33\") " pod="kube-system/coredns-66bc5c9577-zl8b9" Mar 13 00:40:31.363169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e504fed2c43de1b2ed37e21b2cd87bf024e05dace38614a97522b8e7fb87824d-rootfs.mount: Deactivated successfully. Mar 13 00:40:31.368278 systemd[1]: Created slice kubepods-burstable-pod75a2a026_4325_468c_8895_cbf23d722c33.slice - libcontainer container kubepods-burstable-pod75a2a026_4325_468c_8895_cbf23d722c33.slice. Mar 13 00:40:31.383209 systemd[1]: Created slice kubepods-besteffort-podcdd33859_30a2_45e0_8214_d92ea489e090.slice - libcontainer container kubepods-besteffort-podcdd33859_30a2_45e0_8214_d92ea489e090.slice. Mar 13 00:40:31.403323 systemd[1]: Created slice kubepods-besteffort-pod08c12ff1_629a_4b68_a14f_c98f5826ec71.slice - libcontainer container kubepods-besteffort-pod08c12ff1_629a_4b68_a14f_c98f5826ec71.slice. Mar 13 00:40:31.425663 systemd[1]: Created slice kubepods-besteffort-poda9a0c97c_89b4_4e9a_99bd_218cd065a879.slice - libcontainer container kubepods-besteffort-poda9a0c97c_89b4_4e9a_99bd_218cd065a879.slice. Mar 13 00:40:31.441265 systemd[1]: Created slice kubepods-besteffort-pod7c42b50a_6791_4fbf_bf4c_a625fe51988b.slice - libcontainer container kubepods-besteffort-pod7c42b50a_6791_4fbf_bf4c_a625fe51988b.slice. Mar 13 00:40:31.446388 systemd[1]: Created slice kubepods-besteffort-pod1eeafc95_a96f_409e_9dc8_0c4ba5d0f282.slice - libcontainer container kubepods-besteffort-pod1eeafc95_a96f_409e_9dc8_0c4ba5d0f282.slice. 
Mar 13 00:40:31.459755 kubelet[2729]: I0313 00:40:31.459731 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c42b50a-6791-4fbf-bf4c-a625fe51988b-config\") pod \"goldmane-cccfbd5cf-w9499\" (UID: \"7c42b50a-6791-4fbf-bf4c-a625fe51988b\") " pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:31.459928 kubelet[2729]: I0313 00:40:31.459772 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-nginx-config\") pod \"whisker-746cbfd886-rfh2k\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:31.459928 kubelet[2729]: I0313 00:40:31.459803 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wzt\" (UniqueName: \"kubernetes.io/projected/08c12ff1-629a-4b68-a14f-c98f5826ec71-kube-api-access-l5wzt\") pod \"calico-apiserver-76fbbd87df-xwv65\" (UID: \"08c12ff1-629a-4b68-a14f-c98f5826ec71\") " pod="calico-system/calico-apiserver-76fbbd87df-xwv65" Mar 13 00:40:31.459928 kubelet[2729]: I0313 00:40:31.459816 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9kq\" (UniqueName: \"kubernetes.io/projected/a9a0c97c-89b4-4e9a-99bd-218cd065a879-kube-api-access-8x9kq\") pod \"calico-apiserver-76fbbd87df-gh6db\" (UID: \"a9a0c97c-89b4-4e9a-99bd-218cd065a879\") " pod="calico-system/calico-apiserver-76fbbd87df-gh6db" Mar 13 00:40:31.459928 kubelet[2729]: I0313 00:40:31.459829 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7lc\" (UniqueName: \"kubernetes.io/projected/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-kube-api-access-gg7lc\") pod \"whisker-746cbfd886-rfh2k\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:31.459928 kubelet[2729]: I0313 00:40:31.459844 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-ca-bundle\") pod \"whisker-746cbfd886-rfh2k\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:31.460123 kubelet[2729]: I0313 00:40:31.459857 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c42b50a-6791-4fbf-bf4c-a625fe51988b-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-w9499\" (UID: \"7c42b50a-6791-4fbf-bf4c-a625fe51988b\") " pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:31.460123 kubelet[2729]: I0313 00:40:31.459871 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08c12ff1-629a-4b68-a14f-c98f5826ec71-calico-apiserver-certs\") pod \"calico-apiserver-76fbbd87df-xwv65\" (UID: \"08c12ff1-629a-4b68-a14f-c98f5826ec71\") " pod="calico-system/calico-apiserver-76fbbd87df-xwv65" Mar 13 00:40:31.460123 kubelet[2729]: I0313 00:40:31.459884 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/a9a0c97c-89b4-4e9a-99bd-218cd065a879-calico-apiserver-certs\") pod \"calico-apiserver-76fbbd87df-gh6db\" (UID: \"a9a0c97c-89b4-4e9a-99bd-218cd065a879\") " pod="calico-system/calico-apiserver-76fbbd87df-gh6db" Mar 13 00:40:31.460123 kubelet[2729]: I0313 00:40:31.459898 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdd33859-30a2-45e0-8214-d92ea489e090-tigera-ca-bundle\") pod \"calico-kube-controllers-6d445c5d86-vjpzq\" (UID: \"cdd33859-30a2-45e0-8214-d92ea489e090\") " pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" Mar 13 00:40:31.460123 kubelet[2729]: I0313 00:40:31.459911 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nlm\" (UniqueName: \"kubernetes.io/projected/cdd33859-30a2-45e0-8214-d92ea489e090-kube-api-access-r7nlm\") pod \"calico-kube-controllers-6d445c5d86-vjpzq\" (UID: \"cdd33859-30a2-45e0-8214-d92ea489e090\") " pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" Mar 13 00:40:31.460870 kubelet[2729]: I0313 00:40:31.459949 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-backend-key-pair\") pod \"whisker-746cbfd886-rfh2k\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:31.460870 kubelet[2729]: I0313 00:40:31.459970 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7c42b50a-6791-4fbf-bf4c-a625fe51988b-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-w9499\" (UID: \"7c42b50a-6791-4fbf-bf4c-a625fe51988b\") " pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:31.460870 kubelet[2729]: I0313 00:40:31.459983 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjkr\" (UniqueName: \"kubernetes.io/projected/7c42b50a-6791-4fbf-bf4c-a625fe51988b-kube-api-access-cmjkr\") pod \"goldmane-cccfbd5cf-w9499\" (UID: \"7c42b50a-6791-4fbf-bf4c-a625fe51988b\") " pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:31.657938 kubelet[2729]: E0313 00:40:31.657897 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:31.659042 containerd[1551]: time="2026-03-13T00:40:31.658943828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-85w7t,Uid:db812fdd-5246-4625-be37-b6be535eb373,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:31.682716 kubelet[2729]: E0313 00:40:31.682674 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:31.684455 containerd[1551]: time="2026-03-13T00:40:31.684344184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zl8b9,Uid:75a2a026-4325-468c-8895-cbf23d722c33,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:31.711426 containerd[1551]: time="2026-03-13T00:40:31.711277508Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6d445c5d86-vjpzq,Uid:cdd33859-30a2-45e0-8214-d92ea489e090,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:31.715265 containerd[1551]: time="2026-03-13T00:40:31.715017804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-xwv65,Uid:08c12ff1-629a-4b68-a14f-c98f5826ec71,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:31.738179 containerd[1551]: time="2026-03-13T00:40:31.738136771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-gh6db,Uid:a9a0c97c-89b4-4e9a-99bd-218cd065a879,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:31.757251 containerd[1551]: time="2026-03-13T00:40:31.757195179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746cbfd886-rfh2k,Uid:1eeafc95-a96f-409e-9dc8-0c4ba5d0f282,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:31.760667 containerd[1551]: time="2026-03-13T00:40:31.759661425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9499,Uid:7c42b50a-6791-4fbf-bf4c-a625fe51988b,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:31.890786 containerd[1551]: time="2026-03-13T00:40:31.890659987Z" level=error msg="Failed to destroy network for sandbox \"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.895569 containerd[1551]: time="2026-03-13T00:40:31.895515157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zl8b9,Uid:75a2a026-4325-468c-8895-cbf23d722c33,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.896446 kubelet[2729]: E0313 00:40:31.896399 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.897111 kubelet[2729]: E0313 00:40:31.896926 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zl8b9" Mar 13 00:40:31.897111 kubelet[2729]: E0313 00:40:31.896963 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zl8b9" Mar 13 00:40:31.897111 
kubelet[2729]: E0313 00:40:31.897060 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zl8b9_kube-system(75a2a026-4325-468c-8895-cbf23d722c33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zl8b9_kube-system(75a2a026-4325-468c-8895-cbf23d722c33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37c08dfefdd03e8429848dac5524900b5da4f9101660a85b3515e6df8150cd37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zl8b9" podUID="75a2a026-4325-468c-8895-cbf23d722c33" Mar 13 00:40:31.932721 containerd[1551]: time="2026-03-13T00:40:31.932498241Z" level=error msg="Failed to destroy network for sandbox \"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.933947 containerd[1551]: time="2026-03-13T00:40:31.933899125Z" level=error msg="Failed to destroy network for sandbox \"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.945512 containerd[1551]: time="2026-03-13T00:40:31.945456112Z" level=error msg="Failed to destroy network for sandbox \"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.946662 containerd[1551]: time="2026-03-13T00:40:31.946545785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-85w7t,Uid:db812fdd-5246-4625-be37-b6be535eb373,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.952441 kubelet[2729]: E0313 00:40:31.951404 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.952441 kubelet[2729]: E0313 00:40:31.951457 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-85w7t" Mar 13 00:40:31.952441 kubelet[2729]: E0313 00:40:31.951480 2729 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-85w7t" Mar 13 00:40:31.952576 kubelet[2729]: E0313 00:40:31.951527 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-85w7t_kube-system(db812fdd-5246-4625-be37-b6be535eb373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-85w7t_kube-system(db812fdd-5246-4625-be37-b6be535eb373)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e0c3f27106c203a9bdfad442feaa2ffa9ab7d80f5e236983949d91fd44a5988\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-85w7t" podUID="db812fdd-5246-4625-be37-b6be535eb373" Mar 13 00:40:31.954197 containerd[1551]: time="2026-03-13T00:40:31.954009920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-gh6db,Uid:a9a0c97c-89b4-4e9a-99bd-218cd065a879,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.956674 containerd[1551]: time="2026-03-13T00:40:31.956543254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-xwv65,Uid:08c12ff1-629a-4b68-a14f-c98f5826ec71,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.956856 kubelet[2729]: E0313 00:40:31.956827 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.956938 kubelet[2729]: E0313 00:40:31.956922 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76fbbd87df-xwv65" Mar 13 00:40:31.956993 kubelet[2729]: E0313 00:40:31.956980 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76fbbd87df-xwv65" Mar 13 00:40:31.957089 kubelet[2729]: E0313 00:40:31.957066 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76fbbd87df-xwv65_calico-system(08c12ff1-629a-4b68-a14f-c98f5826ec71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76fbbd87df-xwv65_calico-system(08c12ff1-629a-4b68-a14f-c98f5826ec71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bdd209c143756340182d70a7818de09624e9021b05a45fbf309a6a6766008d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-76fbbd87df-xwv65" podUID="08c12ff1-629a-4b68-a14f-c98f5826ec71" Mar 13 00:40:31.957205 kubelet[2729]: E0313 00:40:31.957188 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.957750 kubelet[2729]: E0313 00:40:31.957651 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76fbbd87df-gh6db" Mar 13 00:40:31.957750 kubelet[2729]: E0313 00:40:31.957675 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-76fbbd87df-gh6db" Mar 13 00:40:31.957750 kubelet[2729]: E0313 00:40:31.957711 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76fbbd87df-gh6db_calico-system(a9a0c97c-89b4-4e9a-99bd-218cd065a879)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76fbbd87df-gh6db_calico-system(a9a0c97c-89b4-4e9a-99bd-218cd065a879)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65f9f22489083eba102ff1e6a97298d3aa612e1ec7e2b7aef8d6c29acb08ac34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-76fbbd87df-gh6db" podUID="a9a0c97c-89b4-4e9a-99bd-218cd065a879" Mar 13 00:40:31.975183 containerd[1551]: time="2026-03-13T00:40:31.974519111Z" level=info msg="CreateContainer within sandbox 
\"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:40:31.994060 containerd[1551]: time="2026-03-13T00:40:31.993986650Z" level=error msg="Failed to destroy network for sandbox \"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:31.996716 containerd[1551]: time="2026-03-13T00:40:31.996323332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746cbfd886-rfh2k,Uid:1eeafc95-a96f-409e-9dc8-0c4ba5d0f282,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.003160 containerd[1551]: time="2026-03-13T00:40:32.003124026Z" level=info msg="Container e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:32.005259 kubelet[2729]: E0313 00:40:32.005214 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.005339 kubelet[2729]: E0313 00:40:32.005269 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:32.005339 kubelet[2729]: E0313 00:40:32.005288 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-746cbfd886-rfh2k" Mar 13 00:40:32.005391 kubelet[2729]: E0313 00:40:32.005332 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-746cbfd886-rfh2k_calico-system(1eeafc95-a96f-409e-9dc8-0c4ba5d0f282)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-746cbfd886-rfh2k_calico-system(1eeafc95-a96f-409e-9dc8-0c4ba5d0f282)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6bc142cf48c49c9f9a11a20b660f2d2c8f0a7119cafca10713a228e6ad3c75d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-746cbfd886-rfh2k" podUID="1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" Mar 13 
00:40:32.009876 containerd[1551]: time="2026-03-13T00:40:32.009810249Z" level=error msg="Failed to destroy network for sandbox \"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.012687 containerd[1551]: time="2026-03-13T00:40:32.012633312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d445c5d86-vjpzq,Uid:cdd33859-30a2-45e0-8214-d92ea489e090,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.012924 kubelet[2729]: E0313 00:40:32.012801 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.012924 kubelet[2729]: E0313 00:40:32.012836 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" Mar 13 00:40:32.012924 kubelet[2729]: E0313 00:40:32.012853 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" Mar 13 00:40:32.013253 kubelet[2729]: E0313 00:40:32.012905 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d445c5d86-vjpzq_calico-system(cdd33859-30a2-45e0-8214-d92ea489e090)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d445c5d86-vjpzq_calico-system(cdd33859-30a2-45e0-8214-d92ea489e090)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d329ecfa85ae239c723d11deb2a4f3db43ef9722f8dd05b357cef1088d09d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" podUID="cdd33859-30a2-45e0-8214-d92ea489e090" Mar 13 00:40:32.015868 containerd[1551]: time="2026-03-13T00:40:32.015689268Z" level=error msg="Failed to destroy network for sandbox \"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.016395 containerd[1551]: time="2026-03-13T00:40:32.016361266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9499,Uid:7c42b50a-6791-4fbf-bf4c-a625fe51988b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.016543 kubelet[2729]: E0313 00:40:32.016500 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:32.016600 kubelet[2729]: E0313 00:40:32.016572 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:32.016600 kubelet[2729]: E0313 00:40:32.016589 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w9499" Mar 13 00:40:32.016743 kubelet[2729]: E0313 00:40:32.016668 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-w9499_calico-system(7c42b50a-6791-4fbf-bf4c-a625fe51988b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-w9499_calico-system(7c42b50a-6791-4fbf-bf4c-a625fe51988b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a66ac41f882f6bb88b0a1da2811bdb7d3e6d5ed78d926804e8b4fde6263d490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-w9499" podUID="7c42b50a-6791-4fbf-bf4c-a625fe51988b" Mar 13 00:40:32.018873 containerd[1551]: time="2026-03-13T00:40:32.018844045Z" level=info msg="CreateContainer within sandbox \"80175d14ae56ef85fa8b600393ad4c334d137fe18f03b9c44b3332edc4d34351\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc\"" Mar 13 00:40:32.019971 containerd[1551]: time="2026-03-13T00:40:32.019950856Z" level=info msg="StartContainer for \"e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc\"" Mar 13 00:40:32.023835 containerd[1551]: time="2026-03-13T00:40:32.021323232Z" 
level=info msg="connecting to shim e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc" address="unix:///run/containerd/s/6cae6bf5fdb83601027ecc5bf82e8822a5abb4f2c663de19abca82e94afe31fe" protocol=ttrpc version=3 Mar 13 00:40:32.040814 systemd[1]: Started cri-containerd-e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc.scope - libcontainer container e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc. Mar 13 00:40:32.121655 containerd[1551]: time="2026-03-13T00:40:32.121534341Z" level=info msg="StartContainer for \"e10fc635e210aa9bf21fe223a48104a0867aba7eb8183fe4b2d8ab1d80e69adc\" returns successfully" Mar 13 00:40:32.669508 systemd[1]: run-netns-cni\x2d5bd74641\x2d9a3a\x2d103f\x2d8bf9\x2da0ebd178fcda.mount: Deactivated successfully. Mar 13 00:40:32.669632 systemd[1]: run-netns-cni\x2d6dc73363\x2d373c\x2da28c\x2de94b\x2dd0b59f4a39a3.mount: Deactivated successfully. Mar 13 00:40:32.669704 systemd[1]: run-netns-cni\x2d4d043c1d\x2d37d1\x2dec4d\x2d5fd5\x2d68da684368d0.mount: Deactivated successfully. Mar 13 00:40:32.669781 systemd[1]: run-netns-cni\x2dc96c7f1c\x2df9ad\x2de14d\x2d22b7\x2de17f178bf0da.mount: Deactivated successfully. Mar 13 00:40:32.669853 systemd[1]: run-netns-cni\x2db4f0405b\x2d6d76\x2da418\x2d4e21\x2d06b244129690.mount: Deactivated successfully. Mar 13 00:40:32.763564 systemd[1]: Created slice kubepods-besteffort-pod648ae89e_362f_4fdb_8142_d604f3582645.slice - libcontainer container kubepods-besteffort-pod648ae89e_362f_4fdb_8142_d604f3582645.slice. Mar 13 00:40:32.767919 containerd[1551]: time="2026-03-13T00:40:32.767879517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8v2lv,Uid:648ae89e-362f-4fdb-8142-d604f3582645,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:32.877808 systemd-networkd[1428]: cali55a0bf99f13: Link UP Mar 13 00:40:32.878789 systemd-networkd[1428]: cali55a0bf99f13: Gained carrier Mar 13 00:40:32.901110 containerd[1551]: 2026-03-13 00:40:32.792 [ERROR][3772] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:32.901110 containerd[1551]: 2026-03-13 00:40:32.806 [INFO][3772] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-csi--node--driver--8v2lv-eth0 csi-node-driver- calico-system 648ae89e-362f-4fdb-8142-d604f3582645 746 0 2026-03-13 00:40:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-108-24 csi-node-driver-8v2lv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali55a0bf99f13 [] [] }} ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-" Mar 13 00:40:32.901110 containerd[1551]: 2026-03-13 00:40:32.806 [INFO][3772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.901110 containerd[1551]: 2026-03-13 
00:40:32.834 [INFO][3784] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" HandleID="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Workload="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.841 [INFO][3784] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" HandleID="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Workload="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"csi-node-driver-8v2lv", "timestamp":"2026-03-13 00:40:32.834680805 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.841 [INFO][3784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.841 [INFO][3784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.841 [INFO][3784] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.843 [INFO][3784] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" host="172-236-108-24" Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.847 [INFO][3784] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.851 [INFO][3784] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.853 [INFO][3784] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:32.901310 containerd[1551]: 2026-03-13 00:40:32.854 [INFO][3784] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.854 [INFO][3784] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" host="172-236-108-24" Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.856 [INFO][3784] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248 Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.860 [INFO][3784] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" host="172-236-108-24" Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.863 [INFO][3784] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.1/26] block=192.168.51.0/26 handle="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" host="172-236-108-24" 
Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.863 [INFO][3784] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.1/26] handle="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" host="172-236-108-24" Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.863 [INFO][3784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:40:32.901488 containerd[1551]: 2026-03-13 00:40:32.863 [INFO][3784] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.1/26] IPv6=[] ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" HandleID="k8s-pod-network.8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Workload="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.901819 containerd[1551]: 2026-03-13 00:40:32.868 [INFO][3772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-csi--node--driver--8v2lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"648ae89e-362f-4fdb-8142-d604f3582645", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"csi-node-driver-8v2lv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55a0bf99f13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:32.901967 containerd[1551]: 2026-03-13 00:40:32.868 [INFO][3772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.1/32] ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.901967 containerd[1551]: 2026-03-13 00:40:32.868 [INFO][3772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55a0bf99f13 ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.901967 containerd[1551]: 2026-03-13 00:40:32.878 [INFO][3772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" 
Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.902105 containerd[1551]: 2026-03-13 00:40:32.878 [INFO][3772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-csi--node--driver--8v2lv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"648ae89e-362f-4fdb-8142-d604f3582645", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248", Pod:"csi-node-driver-8v2lv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55a0bf99f13", MAC:"ca:30:45:ba:7e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:32.902171 containerd[1551]: 2026-03-13 00:40:32.897 [INFO][3772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" Namespace="calico-system" Pod="csi-node-driver-8v2lv" WorkloadEndpoint="172--236--108--24-k8s-csi--node--driver--8v2lv-eth0" Mar 13 00:40:32.935685 containerd[1551]: time="2026-03-13T00:40:32.934801228Z" level=info msg="connecting to shim 8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248" address="unix:///run/containerd/s/8aa5e7154c075b643737069bbacc9859d3571650565ad6353b66e5bf7e9f8aed" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:32.974384 kubelet[2729]: I0313 00:40:32.974314 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4j2bd" podStartSLOduration=3.26277393 podStartE2EDuration="12.974299274s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:20.943131572 +0000 UTC m=+18.278684530" lastFinishedPulling="2026-03-13 00:40:30.654656916 +0000 UTC m=+27.990209874" observedRunningTime="2026-03-13 00:40:32.97391127 +0000 UTC m=+30.309464228" watchObservedRunningTime="2026-03-13 00:40:32.974299274 +0000 UTC m=+30.309852242" Mar 13 00:40:32.978836 systemd[1]: Started cri-containerd-8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248.scope - libcontainer container 
8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248. Mar 13 00:40:33.032735 containerd[1551]: time="2026-03-13T00:40:33.032706573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8v2lv,Uid:648ae89e-362f-4fdb-8142-d604f3582645,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248\"" Mar 13 00:40:33.037893 containerd[1551]: time="2026-03-13T00:40:33.037858278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:40:33.072645 kubelet[2729]: I0313 00:40:33.072578 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-ca-bundle\") pod \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " Mar 13 00:40:33.072732 kubelet[2729]: I0313 00:40:33.072649 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-backend-key-pair\") pod \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " Mar 13 00:40:33.072732 kubelet[2729]: I0313 00:40:33.072676 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-nginx-config\") pod \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " Mar 13 00:40:33.072732 kubelet[2729]: I0313 00:40:33.072692 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg7lc\" (UniqueName: \"kubernetes.io/projected/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-kube-api-access-gg7lc\") pod \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\" (UID: \"1eeafc95-a96f-409e-9dc8-0c4ba5d0f282\") " Mar 13 00:40:33.074000 kubelet[2729]: I0313 00:40:33.073342 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" (UID: "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:40:33.074000 kubelet[2729]: I0313 00:40:33.073668 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" (UID: "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:40:33.080642 kubelet[2729]: I0313 00:40:33.079782 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-kube-api-access-gg7lc" (OuterVolumeSpecName: "kube-api-access-gg7lc") pod "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" (UID: "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282"). InnerVolumeSpecName "kube-api-access-gg7lc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:40:33.080791 kubelet[2729]: I0313 00:40:33.080767 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" (UID: "1eeafc95-a96f-409e-9dc8-0c4ba5d0f282"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:40:33.080907 systemd[1]: var-lib-kubelet-pods-1eeafc95\x2da96f\x2d409e\x2d9dc8\x2d0c4ba5d0f282-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgg7lc.mount: Deactivated successfully. Mar 13 00:40:33.081285 systemd[1]: var-lib-kubelet-pods-1eeafc95\x2da96f\x2d409e\x2d9dc8\x2d0c4ba5d0f282-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:40:33.173912 kubelet[2729]: I0313 00:40:33.173875 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-ca-bundle\") on node \"172-236-108-24\" DevicePath \"\"" Mar 13 00:40:33.173912 kubelet[2729]: I0313 00:40:33.173903 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-whisker-backend-key-pair\") on node \"172-236-108-24\" DevicePath \"\"" Mar 13 00:40:33.173912 kubelet[2729]: I0313 00:40:33.173914 2729 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-nginx-config\") on node \"172-236-108-24\" DevicePath \"\"" Mar 13 00:40:33.174054 kubelet[2729]: I0313 00:40:33.173925 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gg7lc\" (UniqueName: \"kubernetes.io/projected/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282-kube-api-access-gg7lc\") on node \"172-236-108-24\" DevicePath \"\"" Mar 13 00:40:33.967043 systemd[1]: Removed slice kubepods-besteffort-pod1eeafc95_a96f_409e_9dc8_0c4ba5d0f282.slice - libcontainer container kubepods-besteffort-pod1eeafc95_a96f_409e_9dc8_0c4ba5d0f282.slice. Mar 13 00:40:34.030240 systemd[1]: Created slice kubepods-besteffort-pod15be5a3f_cdbd_4038_97e8_316b73781a20.slice - libcontainer container kubepods-besteffort-pod15be5a3f_cdbd_4038_97e8_316b73781a20.slice. 
Mar 13 00:40:34.081903 kubelet[2729]: I0313 00:40:34.081857 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15be5a3f-cdbd-4038-97e8-316b73781a20-whisker-ca-bundle\") pod \"whisker-5544f6b947-lf2cw\" (UID: \"15be5a3f-cdbd-4038-97e8-316b73781a20\") " pod="calico-system/whisker-5544f6b947-lf2cw" Mar 13 00:40:34.081903 kubelet[2729]: I0313 00:40:34.081890 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpd2c\" (UniqueName: \"kubernetes.io/projected/15be5a3f-cdbd-4038-97e8-316b73781a20-kube-api-access-cpd2c\") pod \"whisker-5544f6b947-lf2cw\" (UID: \"15be5a3f-cdbd-4038-97e8-316b73781a20\") " pod="calico-system/whisker-5544f6b947-lf2cw" Mar 13 00:40:34.082460 kubelet[2729]: I0313 00:40:34.081912 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/15be5a3f-cdbd-4038-97e8-316b73781a20-nginx-config\") pod \"whisker-5544f6b947-lf2cw\" (UID: \"15be5a3f-cdbd-4038-97e8-316b73781a20\") " pod="calico-system/whisker-5544f6b947-lf2cw" Mar 13 00:40:34.082460 kubelet[2729]: I0313 00:40:34.081931 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15be5a3f-cdbd-4038-97e8-316b73781a20-whisker-backend-key-pair\") pod \"whisker-5544f6b947-lf2cw\" (UID: \"15be5a3f-cdbd-4038-97e8-316b73781a20\") " pod="calico-system/whisker-5544f6b947-lf2cw" Mar 13 00:40:34.338451 containerd[1551]: time="2026-03-13T00:40:34.338404201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5544f6b947-lf2cw,Uid:15be5a3f-cdbd-4038-97e8-316b73781a20,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:34.463035 systemd-networkd[1428]: calia472d35d8a9: Link UP Mar 13 00:40:34.465761 systemd-networkd[1428]: calia472d35d8a9: Gained carrier Mar 13 00:40:34.477679 containerd[1551]: 2026-03-13 00:40:34.368 [ERROR][3994] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:34.477679 containerd[1551]: 2026-03-13 00:40:34.381 [INFO][3994] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0 whisker-5544f6b947- calico-system 15be5a3f-cdbd-4038-97e8-316b73781a20 937 0 2026-03-13 00:40:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5544f6b947 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-108-24 whisker-5544f6b947-lf2cw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia472d35d8a9 [] [] }} ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-" Mar 13 00:40:34.477679 containerd[1551]: 2026-03-13 00:40:34.381 [INFO][3994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 
00:40:34.477679 containerd[1551]: 2026-03-13 00:40:34.411 [INFO][4006] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" HandleID="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Workload="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.417 [INFO][4006] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" HandleID="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Workload="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277500), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"whisker-5544f6b947-lf2cw", "timestamp":"2026-03-13 00:40:34.411776518 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000362dc0)} Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.417 [INFO][4006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.417 [INFO][4006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.417 [INFO][4006] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.419 [INFO][4006] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" host="172-236-108-24" Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.424 [INFO][4006] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.428 [INFO][4006] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.430 [INFO][4006] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:34.477958 containerd[1551]: 2026-03-13 00:40:34.433 [INFO][4006] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.433 [INFO][4006] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" host="172-236-108-24" Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.436 [INFO][4006] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.441 [INFO][4006] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" host="172-236-108-24" Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.447 [INFO][4006] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.2/26] block=192.168.51.0/26 
handle="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" host="172-236-108-24" Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.447 [INFO][4006] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.2/26] handle="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" host="172-236-108-24" Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.448 [INFO][4006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:40:34.478154 containerd[1551]: 2026-03-13 00:40:34.448 [INFO][4006] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.2/26] IPv6=[] ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" HandleID="k8s-pod-network.52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Workload="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.478296 containerd[1551]: 2026-03-13 00:40:34.451 [INFO][3994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0", GenerateName:"whisker-5544f6b947-", Namespace:"calico-system", SelfLink:"", UID:"15be5a3f-cdbd-4038-97e8-316b73781a20", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5544f6b947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"whisker-5544f6b947-lf2cw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia472d35d8a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:34.478296 containerd[1551]: 2026-03-13 00:40:34.451 [INFO][3994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.2/32] ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.478370 containerd[1551]: 2026-03-13 00:40:34.452 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia472d35d8a9 ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.478370 containerd[1551]: 2026-03-13 00:40:34.460 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.478411 containerd[1551]: 2026-03-13 00:40:34.460 [INFO][3994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0", GenerateName:"whisker-5544f6b947-", Namespace:"calico-system", SelfLink:"", UID:"15be5a3f-cdbd-4038-97e8-316b73781a20", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5544f6b947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd", Pod:"whisker-5544f6b947-lf2cw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia472d35d8a9", MAC:"2a:cc:88:90:e7:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:34.478462 containerd[1551]: 2026-03-13 00:40:34.472 [INFO][3994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" Namespace="calico-system" Pod="whisker-5544f6b947-lf2cw" WorkloadEndpoint="172--236--108--24-k8s-whisker--5544f6b947--lf2cw-eth0" Mar 13 00:40:34.500422 containerd[1551]: time="2026-03-13T00:40:34.500340143Z" level=info msg="connecting to shim 52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd" address="unix:///run/containerd/s/54a9c03382b1671648e9063feccf0909df365619151eff95a13fd020b133c5d1" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:34.526740 systemd[1]: Started cri-containerd-52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd.scope - libcontainer container 52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd. 
Mar 13 00:40:34.589401 containerd[1551]: time="2026-03-13T00:40:34.589313326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5544f6b947-lf2cw,Uid:15be5a3f-cdbd-4038-97e8-316b73781a20,Namespace:calico-system,Attempt:0,} returns sandbox id \"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd\"" Mar 13 00:40:34.764358 kubelet[2729]: I0313 00:40:34.764299 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eeafc95-a96f-409e-9dc8-0c4ba5d0f282" path="/var/lib/kubelet/pods/1eeafc95-a96f-409e-9dc8-0c4ba5d0f282/volumes" Mar 13 00:40:34.832286 containerd[1551]: time="2026-03-13T00:40:34.832246166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:34.832962 containerd[1551]: time="2026-03-13T00:40:34.832936508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:40:34.833509 containerd[1551]: time="2026-03-13T00:40:34.833455388Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:34.835080 containerd[1551]: time="2026-03-13T00:40:34.835044169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:34.835808 containerd[1551]: time="2026-03-13T00:40:34.835651672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.79775646s" Mar 13 00:40:34.835808 containerd[1551]: time="2026-03-13T00:40:34.835679952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:40:34.837736 containerd[1551]: time="2026-03-13T00:40:34.837710115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:40:34.840132 containerd[1551]: time="2026-03-13T00:40:34.839872405Z" level=info msg="CreateContainer within sandbox \"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:40:34.850569 containerd[1551]: time="2026-03-13T00:40:34.848828511Z" level=info msg="Container a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:34.856580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583052160.mount: Deactivated successfully. 
Mar 13 00:40:34.860640 containerd[1551]: time="2026-03-13T00:40:34.860263794Z" level=info msg="CreateContainer within sandbox \"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25\"" Mar 13 00:40:34.863032 containerd[1551]: time="2026-03-13T00:40:34.860866395Z" level=info msg="StartContainer for \"a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25\"" Mar 13 00:40:34.863458 containerd[1551]: time="2026-03-13T00:40:34.863410305Z" level=info msg="connecting to shim a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25" address="unix:///run/containerd/s/8aa5e7154c075b643737069bbacc9859d3571650565ad6353b66e5bf7e9f8aed" protocol=ttrpc version=3 Mar 13 00:40:34.873780 systemd-networkd[1428]: cali55a0bf99f13: Gained IPv6LL Mar 13 00:40:34.889737 systemd[1]: Started cri-containerd-a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25.scope - libcontainer container a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25. Mar 13 00:40:34.976776 containerd[1551]: time="2026-03-13T00:40:34.976735637Z" level=info msg="StartContainer for \"a7406f5893063b9f6ef910a9ae02e0c1c2a8ff79bfaeda91c350e1973595fd25\" returns successfully" Mar 13 00:40:35.502283 containerd[1551]: time="2026-03-13T00:40:35.502242501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:35.503152 containerd[1551]: time="2026-03-13T00:40:35.503005869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:40:35.503700 containerd[1551]: time="2026-03-13T00:40:35.503667471Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:35.505216 containerd[1551]: time="2026-03-13T00:40:35.505185904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:35.505883 containerd[1551]: time="2026-03-13T00:40:35.505856489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 667.290211ms" Mar 13 00:40:35.505960 containerd[1551]: time="2026-03-13T00:40:35.505943881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:40:35.507442 containerd[1551]: time="2026-03-13T00:40:35.506881340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:40:35.509931 containerd[1551]: time="2026-03-13T00:40:35.509901540Z" level=info msg="CreateContainer within sandbox \"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:40:35.518318 containerd[1551]: time="2026-03-13T00:40:35.517715073Z" level=info msg="Container 
0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:35.527737 containerd[1551]: time="2026-03-13T00:40:35.527692606Z" level=info msg="CreateContainer within sandbox \"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee\"" Mar 13 00:40:35.528657 containerd[1551]: time="2026-03-13T00:40:35.528468388Z" level=info msg="StartContainer for \"0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee\"" Mar 13 00:40:35.529849 containerd[1551]: time="2026-03-13T00:40:35.529824544Z" level=info msg="connecting to shim 0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee" address="unix:///run/containerd/s/54a9c03382b1671648e9063feccf0909df365619151eff95a13fd020b133c5d1" protocol=ttrpc version=3 Mar 13 00:40:35.550777 systemd[1]: Started cri-containerd-0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee.scope - libcontainer container 0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee. Mar 13 00:40:35.606568 containerd[1551]: time="2026-03-13T00:40:35.606516230Z" level=info msg="StartContainer for \"0388888b551aeceda47755523135a44097620dcf396ffc8ab94b9b7dceef9aee\" returns successfully" Mar 13 00:40:36.344322 systemd-networkd[1428]: calia472d35d8a9: Gained IPv6LL Mar 13 00:40:37.105902 containerd[1551]: time="2026-03-13T00:40:37.105827958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:37.106852 containerd[1551]: time="2026-03-13T00:40:37.106709063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:40:37.107286 containerd[1551]: time="2026-03-13T00:40:37.107256601Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:37.109078 containerd[1551]: time="2026-03-13T00:40:37.109051563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:37.109991 containerd[1551]: time="2026-03-13T00:40:37.109646956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.602735816s" Mar 13 00:40:37.109991 containerd[1551]: time="2026-03-13T00:40:37.109675245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:40:37.110827 containerd[1551]: time="2026-03-13T00:40:37.110809194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:40:37.120640 containerd[1551]: time="2026-03-13T00:40:37.119836893Z" level=info msg="CreateContainer within sandbox \"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:40:37.133461 containerd[1551]: time="2026-03-13T00:40:37.131898105Z" level=info msg="Container cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:37.138856 containerd[1551]: time="2026-03-13T00:40:37.138811178Z" level=info msg="CreateContainer within sandbox \"8a6e6c3eb38ff382d4b8a2e1856055d0f2cd931b3bd69795d94b1df4b97a0248\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0\"" Mar 13 00:40:37.139663 containerd[1551]: time="2026-03-13T00:40:37.139570984Z" level=info msg="StartContainer for \"cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0\"" Mar 13 00:40:37.141431 containerd[1551]: time="2026-03-13T00:40:37.141405139Z" level=info msg="connecting to shim cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0" address="unix:///run/containerd/s/8aa5e7154c075b643737069bbacc9859d3571650565ad6353b66e5bf7e9f8aed" protocol=ttrpc version=3 Mar 13 00:40:37.165767 systemd[1]: Started cri-containerd-cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0.scope - libcontainer container cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0. Mar 13 00:40:37.247861 containerd[1551]: time="2026-03-13T00:40:37.247571229Z" level=info msg="StartContainer for \"cded037339c0eb777ccd2281f0973266e875d52c4182cc6fbb6209172da53ce0\" returns successfully" Mar 13 00:40:37.844954 kubelet[2729]: I0313 00:40:37.844917 2729 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:40:37.846331 kubelet[2729]: I0313 00:40:37.846277 2729 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:40:37.997206 kubelet[2729]: I0313 00:40:37.997148 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8v2lv" podStartSLOduration=13.921848013 podStartE2EDuration="17.995679799s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:33.036538665 +0000 UTC m=+30.372091613" lastFinishedPulling="2026-03-13 00:40:37.110370441 +0000 UTC m=+34.445923399" observedRunningTime="2026-03-13 00:40:37.995182217 +0000 UTC m=+35.330735175" watchObservedRunningTime="2026-03-13 00:40:37.995679799 +0000 UTC m=+35.331232747" Mar 13 00:40:38.613561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572620549.mount: Deactivated successfully. 
Mar 13 00:40:38.625167 containerd[1551]: time="2026-03-13T00:40:38.625139642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:38.625935 containerd[1551]: time="2026-03-13T00:40:38.625821595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:40:38.626398 containerd[1551]: time="2026-03-13T00:40:38.626373427Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:38.628125 containerd[1551]: time="2026-03-13T00:40:38.628104718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:38.628545 containerd[1551]: time="2026-03-13T00:40:38.628513605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.517477068s" Mar 13 00:40:38.628792 containerd[1551]: time="2026-03-13T00:40:38.628772057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:40:38.636035 containerd[1551]: time="2026-03-13T00:40:38.636002915Z" level=info msg="CreateContainer within sandbox \"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:40:38.641848 containerd[1551]: time="2026-03-13T00:40:38.641824793Z" level=info msg="Container 76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:38.647153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709153942.mount: Deactivated successfully. Mar 13 00:40:38.661438 containerd[1551]: time="2026-03-13T00:40:38.661410239Z" level=info msg="CreateContainer within sandbox \"52b4f5070132dd07b964f61d39e9fd8cc6964137ac711d4d7008d7d9ff79e9cd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9\"" Mar 13 00:40:38.661936 containerd[1551]: time="2026-03-13T00:40:38.661885788Z" level=info msg="StartContainer for \"76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9\"" Mar 13 00:40:38.663391 containerd[1551]: time="2026-03-13T00:40:38.663342522Z" level=info msg="connecting to shim 76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9" address="unix:///run/containerd/s/54a9c03382b1671648e9063feccf0909df365619151eff95a13fd020b133c5d1" protocol=ttrpc version=3 Mar 13 00:40:38.690998 systemd[1]: Started cri-containerd-76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9.scope - libcontainer container 76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9. 
Mar 13 00:40:38.742256 containerd[1551]: time="2026-03-13T00:40:38.742225138Z" level=info msg="StartContainer for \"76f022103efcb04030ba254a5278bc86e5898b5b1acbdafc8ff7f4a33c892ae9\" returns successfully" Mar 13 00:40:42.759634 kubelet[2729]: E0313 00:40:42.759568 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:42.762180 containerd[1551]: time="2026-03-13T00:40:42.762030454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zl8b9,Uid:75a2a026-4325-468c-8895-cbf23d722c33,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:42.763247 containerd[1551]: time="2026-03-13T00:40:42.763194439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-xwv65,Uid:08c12ff1-629a-4b68-a14f-c98f5826ec71,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:42.763490 containerd[1551]: time="2026-03-13T00:40:42.763450358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-gh6db,Uid:a9a0c97c-89b4-4e9a-99bd-218cd065a879,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:42.959118 systemd-networkd[1428]: cali6c39f8c92a3: Link UP Mar 13 00:40:42.960843 systemd-networkd[1428]: cali6c39f8c92a3: Gained carrier Mar 13 00:40:42.969513 kubelet[2729]: I0313 00:40:42.968515 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5544f6b947-lf2cw" podStartSLOduration=4.928971615 podStartE2EDuration="8.968496523s" podCreationTimestamp="2026-03-13 00:40:34 +0000 UTC" firstStartedPulling="2026-03-13 00:40:34.591956123 +0000 UTC m=+31.927509082" lastFinishedPulling="2026-03-13 00:40:38.631481042 +0000 UTC m=+35.967033990" observedRunningTime="2026-03-13 00:40:39.001125162 +0000 UTC m=+36.336678110" watchObservedRunningTime="2026-03-13 00:40:42.968496523 +0000 UTC m=+40.304049471" Mar 13 00:40:42.971788 containerd[1551]: 2026-03-13 00:40:42.809 [ERROR][4399] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:42.971788 containerd[1551]: 2026-03-13 00:40:42.820 [INFO][4399] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0 coredns-66bc5c9577- kube-system 75a2a026-4325-468c-8895-cbf23d722c33 875 0 2026-03-13 00:40:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-108-24 coredns-66bc5c9577-zl8b9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c39f8c92a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-" Mar 13 00:40:42.971788 containerd[1551]: 2026-03-13 00:40:42.820 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 
00:40:42.971788 containerd[1551]: 2026-03-13 00:40:42.893 [INFO][4432] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" HandleID="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Workload="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.902 [INFO][4432] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" HandleID="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Workload="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-108-24", "pod":"coredns-66bc5c9577-zl8b9", "timestamp":"2026-03-13 00:40:42.893806394 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000298c60)} Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.902 [INFO][4432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.902 [INFO][4432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.902 [INFO][4432] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.910 [INFO][4432] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" host="172-236-108-24" Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.916 [INFO][4432] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.923 [INFO][4432] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.928 [INFO][4432] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:42.972178 containerd[1551]: 2026-03-13 00:40:42.931 [INFO][4432] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.932 [INFO][4432] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" host="172-236-108-24" Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.934 [INFO][4432] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.937 [INFO][4432] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" host="172-236-108-24" Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.948 [INFO][4432] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.3/26] block=192.168.51.0/26 
handle="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" host="172-236-108-24" Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.948 [INFO][4432] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.3/26] handle="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" host="172-236-108-24" Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.949 [INFO][4432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:40:42.972565 containerd[1551]: 2026-03-13 00:40:42.949 [INFO][4432] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.3/26] IPv6=[] ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" HandleID="k8s-pod-network.6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Workload="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.952 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75a2a026-4325-468c-8895-cbf23d722c33", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"coredns-66bc5c9577-zl8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c39f8c92a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.952 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.3/32] ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" 
Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.952 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c39f8c92a3 ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.958 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.958 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75a2a026-4325-468c-8895-cbf23d722c33", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf", Pod:"coredns-66bc5c9577-zl8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c39f8c92a3", MAC:"ce:b1:ac:ae:62:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:42.973062 containerd[1551]: 2026-03-13 00:40:42.968 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" Namespace="kube-system" Pod="coredns-66bc5c9577-zl8b9" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--zl8b9-eth0" Mar 13 00:40:42.999264 containerd[1551]: time="2026-03-13T00:40:42.999225551Z" level=info msg="connecting to shim 6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf" address="unix:///run/containerd/s/321fc330eba390b139486a0c96734fd52b6ff3fd26dafaab4085a2603b99c8e9" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:43.037797 systemd[1]: Started cri-containerd-6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf.scope - libcontainer container 6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf. Mar 13 00:40:43.058492 systemd-networkd[1428]: calid2406bc615a: Link UP Mar 13 00:40:43.059539 systemd-networkd[1428]: calid2406bc615a: Gained carrier Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.842 [ERROR][4401] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.862 [INFO][4401] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0 calico-apiserver-76fbbd87df- calico-system 08c12ff1-629a-4b68-a14f-c98f5826ec71 874 0 2026-03-13 00:40:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76fbbd87df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-108-24 calico-apiserver-76fbbd87df-xwv65 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid2406bc615a [] [] }} ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.862 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.903 [INFO][4444] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" HandleID="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.913 [INFO][4444] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" HandleID="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"calico-apiserver-76fbbd87df-xwv65", "timestamp":"2026-03-13 00:40:42.90338717 +0000 UTC"}, 
Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188b00)} Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.913 [INFO][4444] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.948 [INFO][4444] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:42.948 [INFO][4444] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.013 [INFO][4444] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.020 [INFO][4444] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.025 [INFO][4444] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.029 [INFO][4444] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.037 [INFO][4444] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.037 [INFO][4444] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.039 [INFO][4444] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7 Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.043 [INFO][4444] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4444] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.4/26] block=192.168.51.0/26 handle="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4444] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.4/26] handle="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" host="172-236-108-24" Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4444] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:43.079422 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4444] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.4/26] IPv6=[] ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" HandleID="k8s-pod-network.613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.054 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0", GenerateName:"calico-apiserver-76fbbd87df-", Namespace:"calico-system", SelfLink:"", UID:"08c12ff1-629a-4b68-a14f-c98f5826ec71", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fbbd87df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"calico-apiserver-76fbbd87df-xwv65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid2406bc615a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.054 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.4/32] ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.054 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2406bc615a ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.060 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.060 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0", GenerateName:"calico-apiserver-76fbbd87df-", Namespace:"calico-system", SelfLink:"", UID:"08c12ff1-629a-4b68-a14f-c98f5826ec71", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fbbd87df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7", Pod:"calico-apiserver-76fbbd87df-xwv65", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid2406bc615a", MAC:"86:30:03:5a:5a:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.081371 containerd[1551]: 2026-03-13 00:40:43.075 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-xwv65" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--xwv65-eth0" Mar 13 00:40:43.107229 containerd[1551]: time="2026-03-13T00:40:43.107022952Z" level=info msg="connecting to shim 613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7" address="unix:///run/containerd/s/0ecd1e761ddfd004bcaa04c23450175e144cbe67be245ba91fd79ef161422e55" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:43.188985 containerd[1551]: time="2026-03-13T00:40:43.188478320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zl8b9,Uid:75a2a026-4325-468c-8895-cbf23d722c33,Namespace:kube-system,Attempt:0,} returns sandbox id \"6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf\"" Mar 13 00:40:43.189337 kubelet[2729]: E0313 00:40:43.189167 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:43.193963 containerd[1551]: time="2026-03-13T00:40:43.192776417Z" level=info msg="CreateContainer within sandbox \"6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:40:43.200645 systemd[1]: Started cri-containerd-613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7.scope - libcontainer container 613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7. 
Mar 13 00:40:43.205066 containerd[1551]: time="2026-03-13T00:40:43.204172356Z" level=info msg="Container a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:43.206194 systemd-networkd[1428]: cali890d5622bb3: Link UP Mar 13 00:40:43.207911 systemd-networkd[1428]: cali890d5622bb3: Gained carrier Mar 13 00:40:43.213018 containerd[1551]: time="2026-03-13T00:40:43.212989248Z" level=info msg="CreateContainer within sandbox \"6466dfa7606f68cb48afa243dec36471aa6071ab23482833e41ce4e7896537cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b\"" Mar 13 00:40:43.214099 containerd[1551]: time="2026-03-13T00:40:43.213509874Z" level=info msg="StartContainer for \"a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b\"" Mar 13 00:40:43.214576 containerd[1551]: time="2026-03-13T00:40:43.214421093Z" level=info msg="connecting to shim a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b" address="unix:///run/containerd/s/321fc330eba390b139486a0c96734fd52b6ff3fd26dafaab4085a2603b99c8e9" protocol=ttrpc version=3 Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.840 [ERROR][4404] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.860 [INFO][4404] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0 calico-apiserver-76fbbd87df- calico-system a9a0c97c-89b4-4e9a-99bd-218cd065a879 873 0 2026-03-13 00:40:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76fbbd87df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-108-24 calico-apiserver-76fbbd87df-gh6db eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali890d5622bb3 [] [] }} ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.860 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.923 [INFO][4442] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" HandleID="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.932 [INFO][4442] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" HandleID="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" 
Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"calico-apiserver-76fbbd87df-gh6db", "timestamp":"2026-03-13 00:40:42.923416147 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:42.932 [INFO][4442] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4442] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.050 [INFO][4442] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.110 [INFO][4442] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.127 [INFO][4442] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.150 [INFO][4442] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.152 [INFO][4442] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.156 [INFO][4442] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.156 [INFO][4442] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.160 [INFO][4442] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38 Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.173 [INFO][4442] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.180 [INFO][4442] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.5/26] block=192.168.51.0/26 handle="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.180 [INFO][4442] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.5/26] handle="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" host="172-236-108-24" Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.180 [INFO][4442] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:43.234114 containerd[1551]: 2026-03-13 00:40:43.180 [INFO][4442] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.5/26] IPv6=[] ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" HandleID="k8s-pod-network.b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Workload="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.200 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0", GenerateName:"calico-apiserver-76fbbd87df-", Namespace:"calico-system", SelfLink:"", UID:"a9a0c97c-89b4-4e9a-99bd-218cd065a879", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fbbd87df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"calico-apiserver-76fbbd87df-gh6db", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali890d5622bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.200 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.5/32] ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.200 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali890d5622bb3 ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.207 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.209 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0", GenerateName:"calico-apiserver-76fbbd87df-", Namespace:"calico-system", SelfLink:"", UID:"a9a0c97c-89b4-4e9a-99bd-218cd065a879", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fbbd87df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38", Pod:"calico-apiserver-76fbbd87df-gh6db", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali890d5622bb3", MAC:"1a:05:78:b2:7d:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.236779 containerd[1551]: 2026-03-13 00:40:43.228 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" Namespace="calico-system" Pod="calico-apiserver-76fbbd87df-gh6db" WorkloadEndpoint="172--236--108--24-k8s-calico--apiserver--76fbbd87df--gh6db-eth0" Mar 13 00:40:43.253826 systemd[1]: Started cri-containerd-a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b.scope - libcontainer container a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b. Mar 13 00:40:43.276312 containerd[1551]: time="2026-03-13T00:40:43.276261059Z" level=info msg="connecting to shim b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38" address="unix:///run/containerd/s/753abbdc50877b5f0ef830c4d90c9d69e7c688cb8130541ec54f957deeaf542d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:43.332318 systemd[1]: Started cri-containerd-b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38.scope - libcontainer container b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38. 
Mar 13 00:40:43.345994 containerd[1551]: time="2026-03-13T00:40:43.345967637Z" level=info msg="StartContainer for \"a6ceb0bf990e1395e8033f56c536b29528d4de36513957a69974020cf0aca27b\" returns successfully" Mar 13 00:40:43.399339 containerd[1551]: time="2026-03-13T00:40:43.399267934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-xwv65,Uid:08c12ff1-629a-4b68-a14f-c98f5826ec71,Namespace:calico-system,Attempt:0,} returns sandbox id \"613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7\"" Mar 13 00:40:43.402393 containerd[1551]: time="2026-03-13T00:40:43.402071740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:40:43.496233 containerd[1551]: time="2026-03-13T00:40:43.496178666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fbbd87df-gh6db,Uid:a9a0c97c-89b4-4e9a-99bd-218cd065a879,Namespace:calico-system,Attempt:0,} returns sandbox id \"b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38\"" Mar 13 00:40:43.759356 containerd[1551]: time="2026-03-13T00:40:43.759228114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9499,Uid:7c42b50a-6791-4fbf-bf4c-a625fe51988b,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:43.911708 systemd-networkd[1428]: cali07cb43decd0: Link UP Mar 13 00:40:43.912540 systemd-networkd[1428]: cali07cb43decd0: Gained carrier Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.805 [ERROR][4673] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.824 [INFO][4673] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0 goldmane-cccfbd5cf- calico-system 7c42b50a-6791-4fbf-bf4c-a625fe51988b 876 0 2026-03-13 00:40:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-108-24 goldmane-cccfbd5cf-w9499 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali07cb43decd0 [] [] }} ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.824 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.854 [INFO][4686] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" HandleID="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Workload="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.861 [INFO][4686] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" 
HandleID="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Workload="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd330), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"goldmane-cccfbd5cf-w9499", "timestamp":"2026-03-13 00:40:43.854749571 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000580f20)} Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.861 [INFO][4686] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.861 [INFO][4686] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.861 [INFO][4686] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.863 [INFO][4686] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.883 [INFO][4686] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.888 [INFO][4686] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.890 [INFO][4686] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.893 [INFO][4686] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.893 [INFO][4686] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.895 [INFO][4686] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.898 [INFO][4686] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.905 [INFO][4686] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.6/26] block=192.168.51.0/26 handle="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.905 [INFO][4686] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.6/26] handle="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" host="172-236-108-24" Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.905 [INFO][4686] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:43.927483 containerd[1551]: 2026-03-13 00:40:43.905 [INFO][4686] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.6/26] IPv6=[] ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" HandleID="k8s-pod-network.03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Workload="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.908 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"7c42b50a-6791-4fbf-bf4c-a625fe51988b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"goldmane-cccfbd5cf-w9499", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07cb43decd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.908 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.6/32] ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.908 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07cb43decd0 ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.913 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.913 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" 
WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"7c42b50a-6791-4fbf-bf4c-a625fe51988b", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e", Pod:"goldmane-cccfbd5cf-w9499", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07cb43decd0", MAC:"0e:52:e2:45:db:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:43.928769 containerd[1551]: 2026-03-13 00:40:43.924 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w9499" WorkloadEndpoint="172--236--108--24-k8s-goldmane--cccfbd5cf--w9499-eth0" Mar 13 00:40:43.948089 containerd[1551]: time="2026-03-13T00:40:43.947846643Z" level=info msg="connecting to shim 03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e" address="unix:///run/containerd/s/b75ba104ca25cb7755c08cccd989fa0a2d9f966fe475ad0b7397fbe5ef1dbb6d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:43.982882 systemd[1]: Started cri-containerd-03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e.scope - libcontainer container 03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e. 
Mar 13 00:40:44.008337 kubelet[2729]: E0313 00:40:44.008313 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:44.028144 kubelet[2729]: I0313 00:40:44.024739 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zl8b9" podStartSLOduration=35.024723043 podStartE2EDuration="35.024723043s" podCreationTimestamp="2026-03-13 00:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:44.022846107 +0000 UTC m=+41.358399075" watchObservedRunningTime="2026-03-13 00:40:44.024723043 +0000 UTC m=+41.360275991" Mar 13 00:40:44.078200 containerd[1551]: time="2026-03-13T00:40:44.077888758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w9499,Uid:7c42b50a-6791-4fbf-bf4c-a625fe51988b,Namespace:calico-system,Attempt:0,} returns sandbox id \"03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e\"" Mar 13 00:40:44.409864 kubelet[2729]: I0313 00:40:44.409808 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:44.410705 kubelet[2729]: E0313 00:40:44.410365 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:44.728009 systemd-networkd[1428]: calid2406bc615a: Gained IPv6LL Mar 13 00:40:44.792965 systemd-networkd[1428]: cali6c39f8c92a3: Gained IPv6LL Mar 13 00:40:45.012919 kubelet[2729]: E0313 00:40:45.012345 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:45.013774 kubelet[2729]: E0313 00:40:45.012560 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:45.175757 systemd-networkd[1428]: cali890d5622bb3: Gained IPv6LL Mar 13 00:40:45.303808 systemd-networkd[1428]: cali07cb43decd0: Gained IPv6LL Mar 13 00:40:45.505276 systemd-networkd[1428]: vxlan.calico: Link UP Mar 13 00:40:45.508924 systemd-networkd[1428]: vxlan.calico: Gained carrier Mar 13 00:40:46.014243 kubelet[2729]: E0313 00:40:46.014205 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:46.352832 containerd[1551]: time="2026-03-13T00:40:46.352785139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.353908 containerd[1551]: time="2026-03-13T00:40:46.353889033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:40:46.354491 containerd[1551]: time="2026-03-13T00:40:46.354454118Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.357720 containerd[1551]: time="2026-03-13T00:40:46.356823625Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.357810 containerd[1551]: time="2026-03-13T00:40:46.357784895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.955254936s" Mar 13 00:40:46.357872 containerd[1551]: time="2026-03-13T00:40:46.357814022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:40:46.359365 containerd[1551]: time="2026-03-13T00:40:46.358727800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:40:46.362455 containerd[1551]: time="2026-03-13T00:40:46.362429024Z" level=info msg="CreateContainer within sandbox \"613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:40:46.375316 containerd[1551]: time="2026-03-13T00:40:46.374784929Z" level=info msg="Container 604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:46.379477 containerd[1551]: time="2026-03-13T00:40:46.379447173Z" level=info msg="CreateContainer within sandbox \"613b467e0ae7946c2f8a396dfdcc54064b9a1e055f45dfa64c81b5f3300a00b7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279\"" Mar 13 00:40:46.380639 containerd[1551]: time="2026-03-13T00:40:46.379855961Z" level=info msg="StartContainer for \"604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279\"" Mar 13 00:40:46.380806 containerd[1551]: time="2026-03-13T00:40:46.380787414Z" level=info msg="connecting to shim 604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279" address="unix:///run/containerd/s/0ecd1e761ddfd004bcaa04c23450175e144cbe67be245ba91fd79ef161422e55" protocol=ttrpc version=3 Mar 13 00:40:46.405856 systemd[1]: Started cri-containerd-604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279.scope - libcontainer container 604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279. 
Mar 13 00:40:46.460196 containerd[1551]: time="2026-03-13T00:40:46.460133883Z" level=info msg="StartContainer for \"604ec3c72a991b16fd91bcd229b1e5afd533e444e4d8b006caa74075b712f279\" returns successfully" Mar 13 00:40:46.527337 containerd[1551]: time="2026-03-13T00:40:46.527268523Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:46.528541 containerd[1551]: time="2026-03-13T00:40:46.528499677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:40:46.531387 containerd[1551]: time="2026-03-13T00:40:46.531178588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 172.427272ms" Mar 13 00:40:46.531387 containerd[1551]: time="2026-03-13T00:40:46.531236472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:40:46.533723 containerd[1551]: time="2026-03-13T00:40:46.533703592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:40:46.536501 containerd[1551]: time="2026-03-13T00:40:46.536446007Z" level=info msg="CreateContainer within sandbox \"b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:40:46.544743 containerd[1551]: time="2026-03-13T00:40:46.544718075Z" level=info msg="Container 6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:46.556351 containerd[1551]: time="2026-03-13T00:40:46.556318288Z" level=info msg="CreateContainer within sandbox \"b624428b040404b0adff217705c7557223b1ec8c3a7a64b1d07032c98da6df38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b\"" Mar 13 00:40:46.556836 containerd[1551]: time="2026-03-13T00:40:46.556764785Z" level=info msg="StartContainer for \"6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b\"" Mar 13 00:40:46.559211 containerd[1551]: time="2026-03-13T00:40:46.559188384Z" level=info msg="connecting to shim 6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b" address="unix:///run/containerd/s/753abbdc50877b5f0ef830c4d90c9d69e7c688cb8130541ec54f957deeaf542d" protocol=ttrpc version=3 Mar 13 00:40:46.595786 systemd[1]: Started cri-containerd-6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b.scope - libcontainer container 6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b. 
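Note the two "Pulled image" records for the same apiserver image: the first pull read roughly 48 MB over about 2.96 s, while the second finished in about 172 ms with only 77 bytes read, which is consistent with the layers already being in the local content store so only the digest had to be re-resolved. A rough back-of-the-envelope rate for the first pull, purely as a sketch from the logged numbers:

    # Rough throughput estimate for the first apiserver image pull logged above.
    size_bytes = 49971841          # reported image size
    duration_s = 2.955254936       # reported pull duration
    print(size_bytes / duration_s / 1e6)   # ~16.9 (decimal MB/s)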
Mar 13 00:40:46.686346 containerd[1551]: time="2026-03-13T00:40:46.684686178Z" level=info msg="StartContainer for \"6cc64edc6265b960b5b8620fc4fd7f0d22180c4b78a9c1f4a1ca0363811c829b\" returns successfully" Mar 13 00:40:46.771732 containerd[1551]: time="2026-03-13T00:40:46.771698570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d445c5d86-vjpzq,Uid:cdd33859-30a2-45e0-8214-d92ea489e090,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:46.943787 systemd-networkd[1428]: calicdeac8b23db: Link UP Mar 13 00:40:46.945773 systemd-networkd[1428]: calicdeac8b23db: Gained carrier Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.851 [INFO][4979] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0 calico-kube-controllers-6d445c5d86- calico-system cdd33859-30a2-45e0-8214-d92ea489e090 872 0 2026-03-13 00:40:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d445c5d86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-108-24 calico-kube-controllers-6d445c5d86-vjpzq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicdeac8b23db [] [] }} ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.851 [INFO][4979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.887 [INFO][4988] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" HandleID="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Workload="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.900 [INFO][4988] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" HandleID="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Workload="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002771d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-24", "pod":"calico-kube-controllers-6d445c5d86-vjpzq", "timestamp":"2026-03-13 00:40:46.887463636 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e0f20)} Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.900 [INFO][4988] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.900 [INFO][4988] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.900 [INFO][4988] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.905 [INFO][4988] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.912 [INFO][4988] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.920 [INFO][4988] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.921 [INFO][4988] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.923 [INFO][4988] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.923 [INFO][4988] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.924 [INFO][4988] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.928 [INFO][4988] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.934 [INFO][4988] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.7/26] block=192.168.51.0/26 handle="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.934 [INFO][4988] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.7/26] handle="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" host="172-236-108-24" Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.934 [INFO][4988] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
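The IPAM sequence above (acquire the host-wide lock, confirm the node's affinity for 192.168.51.0/26, claim one address, release the lock) is how Calico hands each new pod a /32 out of the node's affine block. A /26 block spans 64 addresses, 192.168.51.0 through 192.168.51.63, so the 192.168.51.7 claimed here for calico-kube-controllers, like the earlier 192.168.51.6 for goldmane, falls inside it; a quick check with Python's ipaddress module:

    import ipaddress

    block = ipaddress.ip_network("192.168.51.0/26")          # block affine to 172-236-108-24
    print(block.num_addresses)                                # 64
    print(ipaddress.ip_address("192.168.51.7") in block)      # True (calico-kube-controllers)
    print(ipaddress.ip_address("192.168.51.6") in block)      # True (goldmane, assigned earlier)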
Mar 13 00:40:46.962335 containerd[1551]: 2026-03-13 00:40:46.934 [INFO][4988] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.7/26] IPv6=[] ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" HandleID="k8s-pod-network.0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Workload="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.937 [INFO][4979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0", GenerateName:"calico-kube-controllers-6d445c5d86-", Namespace:"calico-system", SelfLink:"", UID:"cdd33859-30a2-45e0-8214-d92ea489e090", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d445c5d86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"calico-kube-controllers-6d445c5d86-vjpzq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdeac8b23db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.937 [INFO][4979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.7/32] ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.937 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdeac8b23db ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.943 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.945 
[INFO][4979] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0", GenerateName:"calico-kube-controllers-6d445c5d86-", Namespace:"calico-system", SelfLink:"", UID:"cdd33859-30a2-45e0-8214-d92ea489e090", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d445c5d86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee", Pod:"calico-kube-controllers-6d445c5d86-vjpzq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdeac8b23db", MAC:"2a:9a:2b:2e:e1:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:46.962833 containerd[1551]: 2026-03-13 00:40:46.957 [INFO][4979] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" Namespace="calico-system" Pod="calico-kube-controllers-6d445c5d86-vjpzq" WorkloadEndpoint="172--236--108--24-k8s-calico--kube--controllers--6d445c5d86--vjpzq-eth0" Mar 13 00:40:46.992028 containerd[1551]: time="2026-03-13T00:40:46.991948666Z" level=info msg="connecting to shim 0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee" address="unix:///run/containerd/s/d364cd9b520edfb4f19f1c9e63c2353d31839a5a4d8296ca729af8ae5ad28f05" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:47.068921 systemd[1]: Started cri-containerd-0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee.scope - libcontainer container 0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee. 
Mar 13 00:40:47.075848 kubelet[2729]: I0313 00:40:47.075555 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-76fbbd87df-gh6db" podStartSLOduration=24.04110528 podStartE2EDuration="27.075424425s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:43.497672808 +0000 UTC m=+40.833225756" lastFinishedPulling="2026-03-13 00:40:46.531991943 +0000 UTC m=+43.867544901" observedRunningTime="2026-03-13 00:40:47.074697186 +0000 UTC m=+44.410250154" watchObservedRunningTime="2026-03-13 00:40:47.075424425 +0000 UTC m=+44.410977373" Mar 13 00:40:47.077272 kubelet[2729]: I0313 00:40:47.076941 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-76fbbd87df-xwv65" podStartSLOduration=25.120009549 podStartE2EDuration="28.076933796s" podCreationTimestamp="2026-03-13 00:40:19 +0000 UTC" firstStartedPulling="2026-03-13 00:40:43.401607647 +0000 UTC m=+40.737160595" lastFinishedPulling="2026-03-13 00:40:46.358531884 +0000 UTC m=+43.694084842" observedRunningTime="2026-03-13 00:40:47.047752677 +0000 UTC m=+44.383305645" watchObservedRunningTime="2026-03-13 00:40:47.076933796 +0000 UTC m=+44.412486744" Mar 13 00:40:47.164532 containerd[1551]: time="2026-03-13T00:40:47.164384751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d445c5d86-vjpzq,Uid:cdd33859-30a2-45e0-8214-d92ea489e090,Namespace:calico-system,Attempt:0,} returns sandbox id \"0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee\"" Mar 13 00:40:47.480915 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Mar 13 00:40:47.760280 kubelet[2729]: E0313 00:40:47.760168 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:47.762031 containerd[1551]: time="2026-03-13T00:40:47.762002016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-85w7t,Uid:db812fdd-5246-4625-be37-b6be535eb373,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:47.907326 systemd-networkd[1428]: cali8bdf6658db3: Link UP Mar 13 00:40:47.908587 systemd-networkd[1428]: cali8bdf6658db3: Gained carrier Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.815 [INFO][5070] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0 coredns-66bc5c9577- kube-system db812fdd-5246-4625-be37-b6be535eb373 865 0 2026-03-13 00:40:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-108-24 coredns-66bc5c9577-85w7t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8bdf6658db3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.815 [INFO][5070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" 
Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.851 [INFO][5082] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" HandleID="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Workload="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.860 [INFO][5082] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" HandleID="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Workload="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277490), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-108-24", "pod":"coredns-66bc5c9577-85w7t", "timestamp":"2026-03-13 00:40:47.851095903 +0000 UTC"}, Hostname:"172-236-108-24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fadc0)} Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.860 [INFO][5082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.860 [INFO][5082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.860 [INFO][5082] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-24' Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.862 [INFO][5082] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.867 [INFO][5082] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.872 [INFO][5082] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.874 [INFO][5082] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.878 [INFO][5082] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.878 [INFO][5082] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.880 [INFO][5082] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.884 [INFO][5082] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.893 [INFO][5082] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.8/26] block=192.168.51.0/26 handle="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.894 [INFO][5082] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.8/26] handle="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" host="172-236-108-24" Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.894 [INFO][5082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:40:47.941596 containerd[1551]: 2026-03-13 00:40:47.894 [INFO][5082] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.8/26] IPv6=[] ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" HandleID="k8s-pod-network.344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Workload="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.898 [INFO][5070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db812fdd-5246-4625-be37-b6be535eb373", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"", Pod:"coredns-66bc5c9577-85w7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bdf6658db3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.898 [INFO][5070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.8/32] 
ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.898 [INFO][5070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bdf6658db3 ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.910 [INFO][5070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.911 [INFO][5070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db812fdd-5246-4625-be37-b6be535eb373", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-24", ContainerID:"344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a", Pod:"coredns-66bc5c9577-85w7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bdf6658db3", MAC:"e2:fa:f3:ec:dc:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:47.944253 containerd[1551]: 2026-03-13 00:40:47.933 [INFO][5070] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" Namespace="kube-system" Pod="coredns-66bc5c9577-85w7t" WorkloadEndpoint="172--236--108--24-k8s-coredns--66bc5c9577--85w7t-eth0" Mar 13 00:40:48.017299 containerd[1551]: time="2026-03-13T00:40:48.016789841Z" level=info msg="connecting to shim 344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a" address="unix:///run/containerd/s/ba24eb16d88369aa3b0fcfc1073e500addc116ebcc83402fd37a9fe1dc58e68b" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:48.047763 kubelet[2729]: I0313 00:40:48.047719 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:48.077764 systemd[1]: Started cri-containerd-344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a.scope - libcontainer container 344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a. Mar 13 00:40:48.181898 containerd[1551]: time="2026-03-13T00:40:48.181527224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-85w7t,Uid:db812fdd-5246-4625-be37-b6be535eb373,Namespace:kube-system,Attempt:0,} returns sandbox id \"344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a\"" Mar 13 00:40:48.188252 kubelet[2729]: E0313 00:40:48.188219 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:48.201190 containerd[1551]: time="2026-03-13T00:40:48.198752246Z" level=info msg="CreateContainer within sandbox \"344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:40:48.218372 containerd[1551]: time="2026-03-13T00:40:48.218342983Z" level=info msg="Container 4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:48.227813 containerd[1551]: time="2026-03-13T00:40:48.227789927Z" level=info msg="CreateContainer within sandbox \"344998ca2d50eaae6d60cd2b7233f325cfbdc5903ef7374cc482cb0135b87e7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae\"" Mar 13 00:40:48.229520 containerd[1551]: time="2026-03-13T00:40:48.229469957Z" level=info msg="StartContainer for \"4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae\"" Mar 13 00:40:48.230883 containerd[1551]: time="2026-03-13T00:40:48.230818321Z" level=info msg="connecting to shim 4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae" address="unix:///run/containerd/s/ba24eb16d88369aa3b0fcfc1073e500addc116ebcc83402fd37a9fe1dc58e68b" protocol=ttrpc version=3 Mar 13 00:40:48.247780 systemd-networkd[1428]: calicdeac8b23db: Gained IPv6LL Mar 13 00:40:48.277985 systemd[1]: Started cri-containerd-4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae.scope - libcontainer container 4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae. Mar 13 00:40:48.338728 containerd[1551]: time="2026-03-13T00:40:48.338680773Z" level=info msg="StartContainer for \"4fe4f82e6e8da5f251a4ed11ca90d8e12e1128d442bb7f847bb4bad344cdaeae\" returns successfully" Mar 13 00:40:48.946527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528040030.mount: Deactivated successfully. 
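In the coredns WorkloadEndpoint dumps above, the port numbers are printed as Go hex literals; decoded, they are the familiar CoreDNS ports. For reference:

    # Decode the hex Port values from the WorkloadEndpointPort entries above.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
             "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}
    for name, port in ports.items():
        print(name, port)   # 53, 53, 9153, 8080, 8181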
Mar 13 00:40:49.053873 kubelet[2729]: I0313 00:40:49.053832 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:49.054443 kubelet[2729]: E0313 00:40:49.054367 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:49.072424 kubelet[2729]: I0313 00:40:49.072250 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-85w7t" podStartSLOduration=40.072239113 podStartE2EDuration="40.072239113s" podCreationTimestamp="2026-03-13 00:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:49.070289204 +0000 UTC m=+46.405842162" watchObservedRunningTime="2026-03-13 00:40:49.072239113 +0000 UTC m=+46.407792061" Mar 13 00:40:49.547642 containerd[1551]: time="2026-03-13T00:40:49.546840834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:40:49.548467 containerd[1551]: time="2026-03-13T00:40:49.548447198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.551579 containerd[1551]: time="2026-03-13T00:40:49.551505401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.017623077s" Mar 13 00:40:49.551658 containerd[1551]: time="2026-03-13T00:40:49.551579947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:40:49.552127 containerd[1551]: time="2026-03-13T00:40:49.552106843Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.553734 containerd[1551]: time="2026-03-13T00:40:49.552892185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:49.554596 containerd[1551]: time="2026-03-13T00:40:49.554576376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:40:49.556743 containerd[1551]: time="2026-03-13T00:40:49.556722288Z" level=info msg="CreateContainer within sandbox \"03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:40:49.564809 containerd[1551]: time="2026-03-13T00:40:49.564786272Z" level=info msg="Container cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:49.575376 containerd[1551]: time="2026-03-13T00:40:49.575290224Z" level=info msg="CreateContainer within sandbox \"03bc7b1cad57c6aed84352fa6202e81e03231a75c2df9ff6babe68ba311f555e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3\"" Mar 13 00:40:49.576817 containerd[1551]: time="2026-03-13T00:40:49.576773779Z" level=info msg="StartContainer for \"cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3\"" Mar 13 00:40:49.578144 containerd[1551]: time="2026-03-13T00:40:49.578085419Z" level=info msg="connecting to shim cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3" address="unix:///run/containerd/s/b75ba104ca25cb7755c08cccd989fa0a2d9f966fe475ad0b7397fbe5ef1dbb6d" protocol=ttrpc version=3 Mar 13 00:40:49.613745 systemd[1]: Started cri-containerd-cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3.scope - libcontainer container cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3. Mar 13 00:40:49.667603 containerd[1551]: time="2026-03-13T00:40:49.667561012Z" level=info msg="StartContainer for \"cfaa0f3b622d27c29f34116d7a5b7140b238f12c9829e64a96a3c6d1db7e09e3\" returns successfully" Mar 13 00:40:49.847929 systemd-networkd[1428]: cali8bdf6658db3: Gained IPv6LL Mar 13 00:40:50.059458 kubelet[2729]: E0313 00:40:50.058059 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:50.093980 kubelet[2729]: I0313 00:40:50.093917 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-w9499" podStartSLOduration=24.624039874 podStartE2EDuration="30.093902809s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:44.083273585 +0000 UTC m=+41.418826543" lastFinishedPulling="2026-03-13 00:40:49.55313652 +0000 UTC m=+46.888689478" observedRunningTime="2026-03-13 00:40:50.078042508 +0000 UTC m=+47.413595466" watchObservedRunningTime="2026-03-13 00:40:50.093902809 +0000 UTC m=+47.429455757" Mar 13 00:40:51.066703 kubelet[2729]: E0313 00:40:51.066603 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:40:51.455504 containerd[1551]: time="2026-03-13T00:40:51.455239509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.456395 containerd[1551]: time="2026-03-13T00:40:51.455964750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 00:40:51.457656 containerd[1551]: time="2026-03-13T00:40:51.456900966Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.458632 containerd[1551]: time="2026-03-13T00:40:51.458566484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:51.459200 containerd[1551]: time="2026-03-13T00:40:51.459159189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.903995142s" Mar 13 00:40:51.459200 containerd[1551]: time="2026-03-13T00:40:51.459199227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:40:51.476633 containerd[1551]: time="2026-03-13T00:40:51.476585543Z" level=info msg="CreateContainer within sandbox \"0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:40:51.484637 containerd[1551]: time="2026-03-13T00:40:51.480646533Z" level=info msg="Container b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:51.490550 containerd[1551]: time="2026-03-13T00:40:51.490512297Z" level=info msg="CreateContainer within sandbox \"0944bcb906243b4ae6b489829379466fb6d3456d6adfdf8337391f856447beee\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902\"" Mar 13 00:40:51.492603 containerd[1551]: time="2026-03-13T00:40:51.491376238Z" level=info msg="StartContainer for \"b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902\"" Mar 13 00:40:51.492603 containerd[1551]: time="2026-03-13T00:40:51.492281447Z" level=info msg="connecting to shim b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902" address="unix:///run/containerd/s/d364cd9b520edfb4f19f1c9e63c2353d31839a5a4d8296ca729af8ae5ad28f05" protocol=ttrpc version=3 Mar 13 00:40:51.514858 systemd[1]: Started cri-containerd-b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902.scope - libcontainer container b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902. 
Mar 13 00:40:51.579717 containerd[1551]: time="2026-03-13T00:40:51.579678289Z" level=info msg="StartContainer for \"b443c2b1e357483a2527deb6a2dc94416c86cefa1a99ad618d77163d05ac3902\" returns successfully" Mar 13 00:40:52.106524 kubelet[2729]: I0313 00:40:52.104744 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d445c5d86-vjpzq" podStartSLOduration=27.812846177 podStartE2EDuration="32.104729832s" podCreationTimestamp="2026-03-13 00:40:20 +0000 UTC" firstStartedPulling="2026-03-13 00:40:47.168171951 +0000 UTC m=+44.503724899" lastFinishedPulling="2026-03-13 00:40:51.460055606 +0000 UTC m=+48.795608554" observedRunningTime="2026-03-13 00:40:52.104444724 +0000 UTC m=+49.439997692" watchObservedRunningTime="2026-03-13 00:40:52.104729832 +0000 UTC m=+49.440282800" Mar 13 00:41:17.295857 kubelet[2729]: I0313 00:41:17.295754 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:41:18.757729 kubelet[2729]: E0313 00:41:18.757583 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:41:22.757643 kubelet[2729]: E0313 00:41:22.757186 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:41:38.758646 kubelet[2729]: E0313 00:41:38.758050 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:41:38.759930 kubelet[2729]: E0313 00:41:38.759889 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:00.760812 kubelet[2729]: E0313 00:42:00.760667 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:01.757220 kubelet[2729]: E0313 00:42:01.757188 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:12.759918 kubelet[2729]: E0313 00:42:12.759883 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:19.757476 kubelet[2729]: E0313 00:42:19.757445 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:22.156551 systemd[1]: Started sshd@8-172.236.108.24:22-186.96.145.241:54788.service - OpenSSH per-connection server daemon (186.96.145.241:54788). Mar 13 00:42:22.346991 sshd[5782]: Invalid user lct from 186.96.145.241 port 54788 Mar 13 00:42:22.397339 sshd[5782]: Connection closed by invalid user lct 186.96.145.241 port 54788 [preauth] Mar 13 00:42:22.400772 systemd[1]: sshd@8-172.236.108.24:22-186.96.145.241:54788.service: Deactivated successfully. 
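The pod_startup_latency_tracker records scattered through this log carry two durations; in the calico-kube-controllers entry just above, the SLO duration appears to equal the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). This is an observation about the logged values, not a quote of kubelet's code, but the arithmetic checks out exactly:

    # Reproduce the calico-kube-controllers-6d445c5d86-vjpzq startup numbers above.
    e2e  = 32.104729832                    # podStartE2EDuration, seconds
    pull = 51.460055606 - 47.168171951     # seconds-of-minute of lastFinishedPulling and
                                           # firstStartedPulling (both in minute 00:40)
    print(round(e2e - pull, 9))            # 27.812846177, the logged podStartSLOduration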
Mar 13 00:42:42.757924 kubelet[2729]: E0313 00:42:42.757310 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:49.774837 systemd[1]: Started sshd@9-172.236.108.24:22-68.220.241.50:57584.service - OpenSSH per-connection server daemon (68.220.241.50:57584). Mar 13 00:42:49.928738 sshd[5868]: Accepted publickey for core from 68.220.241.50 port 57584 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:42:49.931032 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:49.937436 systemd-logind[1523]: New session 8 of user core. Mar 13 00:42:49.943756 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:42:50.067394 sshd[5874]: Connection closed by 68.220.241.50 port 57584 Mar 13 00:42:50.068833 sshd-session[5868]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:50.073115 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:42:50.073304 systemd[1]: sshd@9-172.236.108.24:22-68.220.241.50:57584.service: Deactivated successfully. Mar 13 00:42:50.075506 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:42:50.077560 systemd-logind[1523]: Removed session 8. Mar 13 00:42:51.757586 kubelet[2729]: E0313 00:42:51.757550 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:42:55.110447 systemd[1]: Started sshd@10-172.236.108.24:22-68.220.241.50:54482.service - OpenSSH per-connection server daemon (68.220.241.50:54482). Mar 13 00:42:55.272952 sshd[5932]: Accepted publickey for core from 68.220.241.50 port 54482 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:42:55.274472 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:42:55.282293 systemd-logind[1523]: New session 9 of user core. Mar 13 00:42:55.285746 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:42:55.404719 sshd[5935]: Connection closed by 68.220.241.50 port 54482 Mar 13 00:42:55.404326 sshd-session[5932]: pam_unix(sshd:session): session closed for user core Mar 13 00:42:55.408974 systemd[1]: sshd@10-172.236.108.24:22-68.220.241.50:54482.service: Deactivated successfully. Mar 13 00:42:55.411588 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:42:55.412456 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:42:55.414038 systemd-logind[1523]: Removed session 9. Mar 13 00:43:00.439420 systemd[1]: Started sshd@11-172.236.108.24:22-68.220.241.50:54488.service - OpenSSH per-connection server daemon (68.220.241.50:54488). Mar 13 00:43:00.592660 sshd[5948]: Accepted publickey for core from 68.220.241.50 port 54488 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:00.594404 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:00.601603 systemd-logind[1523]: New session 10 of user core. Mar 13 00:43:00.606759 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 13 00:43:00.727313 sshd[5951]: Connection closed by 68.220.241.50 port 54488 Mar 13 00:43:00.727866 sshd-session[5948]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:00.732905 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:43:00.734044 systemd[1]: sshd@11-172.236.108.24:22-68.220.241.50:54488.service: Deactivated successfully. Mar 13 00:43:00.736922 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:43:00.738500 systemd-logind[1523]: Removed session 10. Mar 13 00:43:03.757756 kubelet[2729]: E0313 00:43:03.757723 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:43:04.757644 kubelet[2729]: E0313 00:43:04.757517 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:43:05.758298 systemd[1]: Started sshd@12-172.236.108.24:22-68.220.241.50:48468.service - OpenSSH per-connection server daemon (68.220.241.50:48468). Mar 13 00:43:05.904208 sshd[5991]: Accepted publickey for core from 68.220.241.50 port 48468 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:05.906202 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:05.911562 systemd-logind[1523]: New session 11 of user core. Mar 13 00:43:05.915744 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:43:06.030016 sshd[5994]: Connection closed by 68.220.241.50 port 48468 Mar 13 00:43:06.030893 sshd-session[5991]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:06.036162 systemd[1]: sshd@12-172.236.108.24:22-68.220.241.50:48468.service: Deactivated successfully. Mar 13 00:43:06.039519 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:43:06.040417 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:43:06.042253 systemd-logind[1523]: Removed session 11. Mar 13 00:43:11.070529 systemd[1]: Started sshd@13-172.236.108.24:22-68.220.241.50:48470.service - OpenSSH per-connection server daemon (68.220.241.50:48470). Mar 13 00:43:11.216534 sshd[6009]: Accepted publickey for core from 68.220.241.50 port 48470 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:11.218260 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:11.223244 systemd-logind[1523]: New session 12 of user core. Mar 13 00:43:11.227730 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:43:11.347638 sshd[6012]: Connection closed by 68.220.241.50 port 48470 Mar 13 00:43:11.349400 sshd-session[6009]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:11.353231 systemd[1]: sshd@13-172.236.108.24:22-68.220.241.50:48470.service: Deactivated successfully. Mar 13 00:43:11.355456 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:43:11.356308 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:43:11.358254 systemd-logind[1523]: Removed session 12. Mar 13 00:43:16.389936 systemd[1]: Started sshd@14-172.236.108.24:22-68.220.241.50:36168.service - OpenSSH per-connection server daemon (68.220.241.50:36168). 
Mar 13 00:43:16.552093 sshd[6025]: Accepted publickey for core from 68.220.241.50 port 36168 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:16.553923 sshd-session[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:16.558175 systemd-logind[1523]: New session 13 of user core. Mar 13 00:43:16.563813 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:43:16.679951 sshd[6028]: Connection closed by 68.220.241.50 port 36168 Mar 13 00:43:16.680809 sshd-session[6025]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:16.684640 systemd[1]: sshd@14-172.236.108.24:22-68.220.241.50:36168.service: Deactivated successfully. Mar 13 00:43:16.686595 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:43:16.687391 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:43:16.688874 systemd-logind[1523]: Removed session 13. Mar 13 00:43:19.758193 kubelet[2729]: E0313 00:43:19.757381 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:43:21.715818 systemd[1]: Started sshd@15-172.236.108.24:22-68.220.241.50:36176.service - OpenSSH per-connection server daemon (68.220.241.50:36176). Mar 13 00:43:21.878403 sshd[6062]: Accepted publickey for core from 68.220.241.50 port 36176 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:21.880445 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:21.891848 systemd-logind[1523]: New session 14 of user core. Mar 13 00:43:21.897747 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:43:22.030040 sshd[6065]: Connection closed by 68.220.241.50 port 36176 Mar 13 00:43:22.030945 sshd-session[6062]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:22.035467 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:43:22.036541 systemd[1]: sshd@15-172.236.108.24:22-68.220.241.50:36176.service: Deactivated successfully. Mar 13 00:43:22.039247 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:43:22.041471 systemd-logind[1523]: Removed session 14. Mar 13 00:43:22.058819 systemd[1]: Started sshd@16-172.236.108.24:22-68.220.241.50:36182.service - OpenSSH per-connection server daemon (68.220.241.50:36182). Mar 13 00:43:22.218268 sshd[6078]: Accepted publickey for core from 68.220.241.50 port 36182 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:43:22.219452 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:43:22.224663 systemd-logind[1523]: New session 15 of user core. Mar 13 00:43:22.229742 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:43:22.383283 sshd[6122]: Connection closed by 68.220.241.50 port 36182 Mar 13 00:43:22.386793 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Mar 13 00:43:22.391293 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:43:22.392002 systemd[1]: sshd@16-172.236.108.24:22-68.220.241.50:36182.service: Deactivated successfully. Mar 13 00:43:22.393927 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:43:22.397003 systemd-logind[1523]: Removed session 15. 
Mar 13 00:43:22.415817 systemd[1]: Started sshd@17-172.236.108.24:22-68.220.241.50:58996.service - OpenSSH per-connection server daemon (68.220.241.50:58996).
Mar 13 00:43:22.561857 sshd[6132]: Accepted publickey for core from 68.220.241.50 port 58996 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:22.564303 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:22.571579 systemd-logind[1523]: New session 16 of user core.
Mar 13 00:43:22.580804 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 00:43:22.685251 sshd[6135]: Connection closed by 68.220.241.50 port 58996
Mar 13 00:43:22.686191 sshd-session[6132]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:22.690004 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit.
Mar 13 00:43:22.691413 systemd[1]: sshd@17-172.236.108.24:22-68.220.241.50:58996.service: Deactivated successfully.
Mar 13 00:43:22.695249 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 00:43:22.697520 systemd-logind[1523]: Removed session 16.
Mar 13 00:43:27.721230 systemd[1]: Started sshd@18-172.236.108.24:22-68.220.241.50:59012.service - OpenSSH per-connection server daemon (68.220.241.50:59012).
Mar 13 00:43:27.876681 sshd[6161]: Accepted publickey for core from 68.220.241.50 port 59012 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:27.878029 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:27.885248 systemd-logind[1523]: New session 17 of user core.
Mar 13 00:43:27.889756 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 00:43:28.010225 sshd[6164]: Connection closed by 68.220.241.50 port 59012
Mar 13 00:43:28.011553 sshd-session[6161]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:28.016465 systemd[1]: sshd@18-172.236.108.24:22-68.220.241.50:59012.service: Deactivated successfully.
Mar 13 00:43:28.018928 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 00:43:28.020051 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit.
Mar 13 00:43:28.022070 systemd-logind[1523]: Removed session 17.
Mar 13 00:43:32.758312 kubelet[2729]: E0313 00:43:32.758263 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:43:33.040687 systemd[1]: Started sshd@19-172.236.108.24:22-68.220.241.50:57694.service - OpenSSH per-connection server daemon (68.220.241.50:57694).
Mar 13 00:43:33.184514 sshd[6182]: Accepted publickey for core from 68.220.241.50 port 57694 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:33.186040 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:33.190673 systemd-logind[1523]: New session 18 of user core.
Mar 13 00:43:33.196743 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 00:43:33.310883 sshd[6185]: Connection closed by 68.220.241.50 port 57694
Mar 13 00:43:33.311818 sshd-session[6182]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:33.316558 systemd[1]: sshd@19-172.236.108.24:22-68.220.241.50:57694.service: Deactivated successfully.
Mar 13 00:43:33.319337 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 00:43:33.320805 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit.
Mar 13 00:43:33.322309 systemd-logind[1523]: Removed session 18.
Mar 13 00:43:33.340397 systemd[1]: Started sshd@20-172.236.108.24:22-68.220.241.50:57704.service - OpenSSH per-connection server daemon (68.220.241.50:57704).
Mar 13 00:43:33.488562 sshd[6197]: Accepted publickey for core from 68.220.241.50 port 57704 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:33.489978 sshd-session[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:33.495076 systemd-logind[1523]: New session 19 of user core.
Mar 13 00:43:33.501746 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 00:43:33.849278 sshd[6200]: Connection closed by 68.220.241.50 port 57704
Mar 13 00:43:33.851151 sshd-session[6197]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:33.855687 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit.
Mar 13 00:43:33.856107 systemd[1]: sshd@20-172.236.108.24:22-68.220.241.50:57704.service: Deactivated successfully.
Mar 13 00:43:33.858902 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 00:43:33.861968 systemd-logind[1523]: Removed session 19.
Mar 13 00:43:33.886827 systemd[1]: Started sshd@21-172.236.108.24:22-68.220.241.50:57720.service - OpenSSH per-connection server daemon (68.220.241.50:57720).
Mar 13 00:43:34.049017 sshd[6210]: Accepted publickey for core from 68.220.241.50 port 57720 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:34.051252 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:34.060303 systemd-logind[1523]: New session 20 of user core.
Mar 13 00:43:34.063754 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 00:43:34.641797 sshd[6213]: Connection closed by 68.220.241.50 port 57720
Mar 13 00:43:34.644545 sshd-session[6210]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:34.649768 systemd[1]: sshd@21-172.236.108.24:22-68.220.241.50:57720.service: Deactivated successfully.
Mar 13 00:43:34.654456 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 00:43:34.656148 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:43:34.660254 systemd-logind[1523]: Removed session 20.
Mar 13 00:43:34.674183 systemd[1]: Started sshd@22-172.236.108.24:22-68.220.241.50:57736.service - OpenSSH per-connection server daemon (68.220.241.50:57736).
Mar 13 00:43:34.830398 sshd[6234]: Accepted publickey for core from 68.220.241.50 port 57736 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:34.832597 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:34.839037 systemd-logind[1523]: New session 21 of user core.
Mar 13 00:43:34.849813 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 00:43:35.090106 sshd[6240]: Connection closed by 68.220.241.50 port 57736
Mar 13 00:43:35.091852 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:35.096281 systemd[1]: sshd@22-172.236.108.24:22-68.220.241.50:57736.service: Deactivated successfully.
Mar 13 00:43:35.098601 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:43:35.100266 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:43:35.102108 systemd-logind[1523]: Removed session 21.
Mar 13 00:43:35.120702 systemd[1]: Started sshd@23-172.236.108.24:22-68.220.241.50:57744.service - OpenSSH per-connection server daemon (68.220.241.50:57744).
Mar 13 00:43:35.271664 sshd[6273]: Accepted publickey for core from 68.220.241.50 port 57744 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:35.272969 sshd-session[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:35.277117 systemd-logind[1523]: New session 22 of user core.
Mar 13 00:43:35.286735 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:43:35.397279 sshd[6276]: Connection closed by 68.220.241.50 port 57744
Mar 13 00:43:35.398272 sshd-session[6273]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:35.404379 systemd[1]: sshd@23-172.236.108.24:22-68.220.241.50:57744.service: Deactivated successfully.
Mar 13 00:43:35.407190 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:43:35.408271 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:43:35.410377 systemd-logind[1523]: Removed session 22.
Mar 13 00:43:39.757092 kubelet[2729]: E0313 00:43:39.757056 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:43:40.432027 systemd[1]: Started sshd@24-172.236.108.24:22-68.220.241.50:57746.service - OpenSSH per-connection server daemon (68.220.241.50:57746).
Mar 13 00:43:40.575651 sshd[6311]: Accepted publickey for core from 68.220.241.50 port 57746 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:40.576831 sshd-session[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:40.581979 systemd-logind[1523]: New session 23 of user core.
Mar 13 00:43:40.590757 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:43:40.698837 sshd[6314]: Connection closed by 68.220.241.50 port 57746
Mar 13 00:43:40.700785 sshd-session[6311]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:40.704808 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:43:40.705950 systemd[1]: sshd@24-172.236.108.24:22-68.220.241.50:57746.service: Deactivated successfully.
Mar 13 00:43:40.708460 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:43:40.710693 systemd-logind[1523]: Removed session 23.
Mar 13 00:43:45.733078 systemd[1]: Started sshd@25-172.236.108.24:22-68.220.241.50:45834.service - OpenSSH per-connection server daemon (68.220.241.50:45834).
Mar 13 00:43:45.882250 sshd[6328]: Accepted publickey for core from 68.220.241.50 port 45834 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:45.883963 sshd-session[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:45.889061 systemd-logind[1523]: New session 24 of user core.
Mar 13 00:43:45.893720 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:43:46.011351 sshd[6331]: Connection closed by 68.220.241.50 port 45834
Mar 13 00:43:46.012802 sshd-session[6328]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:46.017017 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:43:46.017816 systemd[1]: sshd@25-172.236.108.24:22-68.220.241.50:45834.service: Deactivated successfully.
Mar 13 00:43:46.020523 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:43:46.022684 systemd-logind[1523]: Removed session 24.
Mar 13 00:43:51.044031 systemd[1]: Started sshd@26-172.236.108.24:22-68.220.241.50:45846.service - OpenSSH per-connection server daemon (68.220.241.50:45846).
Mar 13 00:43:51.197875 sshd[6388]: Accepted publickey for core from 68.220.241.50 port 45846 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:51.199744 sshd-session[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:51.208378 systemd-logind[1523]: New session 25 of user core.
Mar 13 00:43:51.212738 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 00:43:51.315974 sshd[6391]: Connection closed by 68.220.241.50 port 45846
Mar 13 00:43:51.316210 sshd-session[6388]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:51.321429 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit.
Mar 13 00:43:51.322270 systemd[1]: sshd@26-172.236.108.24:22-68.220.241.50:45846.service: Deactivated successfully.
Mar 13 00:43:51.324912 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 00:43:51.326351 systemd-logind[1523]: Removed session 25.
Mar 13 00:43:56.347699 systemd[1]: Started sshd@27-172.236.108.24:22-68.220.241.50:36442.service - OpenSSH per-connection server daemon (68.220.241.50:36442).
Mar 13 00:43:56.493922 sshd[6464]: Accepted publickey for core from 68.220.241.50 port 36442 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U
Mar 13 00:43:56.495204 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:43:56.500183 systemd-logind[1523]: New session 26 of user core.
Mar 13 00:43:56.504736 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 00:43:56.616391 sshd[6467]: Connection closed by 68.220.241.50 port 36442
Mar 13 00:43:56.618067 sshd-session[6464]: pam_unix(sshd:session): session closed for user core
Mar 13 00:43:56.625010 systemd[1]: sshd@27-172.236.108.24:22-68.220.241.50:36442.service: Deactivated successfully.
Mar 13 00:43:56.627587 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 00:43:56.628292 systemd-logind[1523]: Session 26 logged out. Waiting for processes to exit.
Mar 13 00:43:56.629851 systemd-logind[1523]: Removed session 26.
Mar 13 00:43:58.758651 kubelet[2729]: E0313 00:43:58.757896 2729 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"