Mar 7 01:32:12.102905 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:32:12.102939 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:32:12.102952 kernel: BIOS-provided physical RAM map:
Mar 7 01:32:12.102962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 7 01:32:12.102972 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 7 01:32:12.102986 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:32:12.102997 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 7 01:32:12.103006 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 7 01:32:12.103016 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:32:12.103027 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:32:12.103035 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:32:12.103044 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:32:12.103055 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 7 01:32:12.103070 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:32:12.103082 kernel: NX (Execute Disable) protection: active
Mar 7 01:32:12.103092 kernel: APIC: Static calls initialized
Mar 7 01:32:12.103103 kernel: SMBIOS 2.8 present.
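[Editorial aside, not part of the log: the BIOS-e820 lines above describe the firmware memory map, with inclusive hex ranges tagged `usable` or `reserved`. A minimal sketch of summing the usable ranges — the `E820` string below quotes the three usable lines from the log with timestamps stripped:]

```python
import re

# The three 'usable' e820 lines from the log above, timestamps stripped.
E820 = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
"""

def usable_bytes(text):
    """Sum the sizes of all ranges marked 'usable' (range ends are inclusive)."""
    total = 0
    for start, end in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", text):
        total += int(end, 16) - int(start, 16) + 1
    return total

print(usable_bytes(E820))  # 4294428672 bytes, i.e. just under 4.1 GiB
```

[The total is slightly above the 4193772K the kernel later reports as its own accounting, since the kernel carves out additional pages during early setup.]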
Mar 7 01:32:12.103113 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 7 01:32:12.103123 kernel: Hypervisor detected: KVM
Mar 7 01:32:12.103156 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:32:12.103167 kernel: kvm-clock: using sched offset of 6120650012 cycles
Mar 7 01:32:12.103178 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:32:12.103190 kernel: tsc: Detected 1999.999 MHz processor
Mar 7 01:32:12.103201 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:32:12.103212 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:32:12.103224 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 7 01:32:12.103235 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:32:12.103246 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:32:12.103261 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 7 01:32:12.103273 kernel: Using GB pages for direct mapping
Mar 7 01:32:12.103284 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:32:12.103294 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 7 01:32:12.103305 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103318 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103328 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103338 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 7 01:32:12.103350 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103366 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103377 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103388 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:32:12.103407 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 7 01:32:12.103418 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 7 01:32:12.103429 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 7 01:32:12.103444 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 7 01:32:12.103458 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 7 01:32:12.103469 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 7 01:32:12.103481 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 7 01:32:12.103759 kernel: No NUMA configuration found
Mar 7 01:32:12.103778 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 7 01:32:12.103789 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Mar 7 01:32:12.103801 kernel: Zone ranges:
Mar 7 01:32:12.103820 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:32:12.103832 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:32:12.103843 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:32:12.103856 kernel: Movable zone start for each node
Mar 7 01:32:12.103866 kernel: Early memory node ranges
Mar 7 01:32:12.103878 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:32:12.103890 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 7 01:32:12.103901 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:32:12.103911 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 7 01:32:12.103923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:32:12.103941 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:32:12.103952 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 7 01:32:12.103965 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:32:12.103976 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:32:12.103987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:32:12.103998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:32:12.104010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:32:12.104021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:32:12.104032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:32:12.104050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:32:12.104061 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:32:12.104072 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:32:12.104084 kernel: TSC deadline timer available
Mar 7 01:32:12.104096 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:32:12.104107 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:32:12.104118 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:32:12.104476 kernel: kvm-guest: setup PV sched yield
Mar 7 01:32:12.104494 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:32:12.104512 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:32:12.104524 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:32:12.104536 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:32:12.104547 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:32:12.104558 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:32:12.104571 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:32:12.104581 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:32:12.104593 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:32:12.104607 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:32:12.104624 kernel: random: crng init done
Mar 7 01:32:12.104636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:32:12.104648 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:32:12.104660 kernel: Fallback order for Node 0: 0
Mar 7 01:32:12.104671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 7 01:32:12.104683 kernel: Policy zone: Normal
Mar 7 01:32:12.104695 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:32:12.104706 kernel: software IO TLB: area num 2.
Mar 7 01:32:12.104742 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227296K reserved, 0K cma-reserved)
Mar 7 01:32:12.104754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:32:12.104765 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:32:12.104778 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:32:12.104790 kernel: Dynamic Preempt: voluntary
Mar 7 01:32:12.104801 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:32:12.104813 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:32:12.104826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:32:12.104837 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:32:12.104855 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:32:12.104866 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:32:12.104877 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
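[Editorial aside, not part of the log: the kernel command line above is a space-separated list of `key=value` tokens and bare flags, and keys such as `rootflags` and `console` may legitimately repeat. A minimal parsing sketch; `CMDLINE` quotes a shortened subset of the line above for illustration:]

```python
# Shortened excerpt of the kernel command line printed above.
CMDLINE = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai"
)

def parse_cmdline(line):
    """Split a kernel command line into ordered (key, value) pairs.
    Bare flags get value None; repeated keys are kept in order."""
    params = []
    for token in line.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params

params = parse_cmdline(CMDLINE)
print(dict(params)["flatcar.oem.id"])  # akamai
```

[On a live system the same line is readable from /proc/cmdline; keeping repeated keys in order matters because later occurrences can override earlier ones.]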
Mar 7 01:32:12.104890 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:32:12.104901 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:32:12.104912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:32:12.104925 kernel: Console: colour VGA+ 80x25
Mar 7 01:32:12.104936 kernel: printk: console [tty0] enabled
Mar 7 01:32:12.104947 kernel: printk: console [ttyS0] enabled
Mar 7 01:32:12.104964 kernel: ACPI: Core revision 20230628
Mar 7 01:32:12.104976 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:32:12.104987 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:32:12.105000 kernel: x2apic enabled
Mar 7 01:32:12.105025 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:32:12.105042 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:32:12.105054 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:32:12.105066 kernel: kvm-guest: setup PV IPIs
Mar 7 01:32:12.105079 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:32:12.105092 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:32:12.105104 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Mar 7 01:32:12.105117 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:32:12.105169 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:32:12.105183 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:32:12.105196 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:32:12.105257 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:32:12.105274 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:32:12.105292 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 7 01:32:12.105305 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:32:12.105317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:32:12.105330 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:32:12.105344 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:32:12.105356 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:32:12.105368 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:32:12.105381 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:32:12.105398 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:32:12.105411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:32:12.105424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:32:12.105436 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:32:12.105449 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:32:12.105461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:32:12.105473 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 7 01:32:12.105487 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 7 01:32:12.105499 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:32:12.105515 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:32:12.105528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:32:12.105540 kernel: landlock: Up and running.
Mar 7 01:32:12.105552 kernel: SELinux: Initializing.
Mar 7 01:32:12.105564 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.105577 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.105589 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:32:12.105601 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:32:12.105614 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
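[Editorial aside, not part of the log: the x86/fpu lines above list four XSAVE feature bits (0x001, 0x002, 0x004, 0x200), and the kernel then reports the enabled mask as 0x207. A one-line arithmetic check that the mask is exactly the OR of those bits:]

```python
# The four XSAVE features reported in the log, keyed by their feature bit.
features = {
    0x001: "x87 floating point registers",
    0x002: "SSE registers",
    0x004: "AVX registers",
    0x200: "Protection Keys User registers",
}

# OR the bits together; this reproduces the 'Enabled xstate features 0x207' mask.
mask = 0
for bit in features:
    mask |= bit

print(hex(mask))  # 0x207
```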
Mar 7 01:32:12.105631 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:32:12.105643 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:32:12.105656 kernel: ... version: 0
Mar 7 01:32:12.105668 kernel: ... bit width: 48
Mar 7 01:32:12.105680 kernel: ... generic registers: 6
Mar 7 01:32:12.105692 kernel: ... value mask: 0000ffffffffffff
Mar 7 01:32:12.105705 kernel: ... max period: 00007fffffffffff
Mar 7 01:32:12.105716 kernel: ... fixed-purpose events: 0
Mar 7 01:32:12.105728 kernel: ... event mask: 000000000000003f
Mar 7 01:32:12.105746 kernel: signal: max sigframe size: 3376
Mar 7 01:32:12.105758 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:32:12.105770 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:32:12.105783 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:32:12.105794 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:32:12.105806 kernel: .... node #0, CPUs: #1
Mar 7 01:32:12.105819 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:32:12.105831 kernel: smpboot: Max logical packages: 1
Mar 7 01:32:12.105843 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 7 01:32:12.105860 kernel: devtmpfs: initialized
Mar 7 01:32:12.105872 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:32:12.105884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:32:12.105895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:32:12.105908 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:32:12.105921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:32:12.105933 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:32:12.105945 kernel: audit: type=2000 audit(1772847130.737:1): state=initialized audit_enabled=0 res=1
Mar 7 01:32:12.105956 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:32:12.105975 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:32:12.105986 kernel: cpuidle: using governor menu
Mar 7 01:32:12.105998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:32:12.106009 kernel: dca service started, version 1.12.1
Mar 7 01:32:12.106023 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:32:12.106034 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:32:12.106046 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:32:12.106057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:32:12.106071 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:32:12.106087 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:32:12.106099 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:32:12.106111 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:32:12.106124 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:32:12.106187 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:32:12.106200 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:32:12.106214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:32:12.106226 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:32:12.106238 kernel: ACPI: Interpreter enabled
Mar 7 01:32:12.106257 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:32:12.106269 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:32:12.106282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:32:12.106294 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:32:12.106306 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:32:12.106319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:32:12.106628 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:32:12.107295 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:32:12.107504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:32:12.107521 kernel: PCI host bridge to bus 0000:00
Mar 7 01:32:12.107717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:32:12.107894 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:32:12.109250 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:32:12.110195 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:32:12.110395 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:32:12.112930 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:32:12.113114 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:32:12.114438 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:32:12.114654 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:32:12.114874 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:32:12.115110 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:32:12.115665 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:32:12.115862 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:32:12.116076 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:32:12.116403 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:32:12.116599 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:32:12.116795 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:32:12.117002 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:32:12.117248 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:32:12.117441 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:32:12.118256 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:32:12.118454 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:32:12.118665 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:32:12.118859 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:32:12.119072 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:32:12.120384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:32:12.120585 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:32:12.120793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:32:12.120984 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:32:12.121003 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:32:12.121016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:32:12.121028 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:32:12.121048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:32:12.121061 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:32:12.121074 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:32:12.121086 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:32:12.121097 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:32:12.121110 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:32:12.121123 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:32:12.121154 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:32:12.121167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:32:12.121184 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:32:12.121197 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:32:12.121209 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:32:12.121220 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:32:12.121232 kernel: iommu: Default domain type: Translated
Mar 7 01:32:12.121245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:32:12.121257 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:32:12.121269 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:32:12.121281 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:32:12.121299 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:32:12.121489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:32:12.121677 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:32:12.121866 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:32:12.121885 kernel: vgaarb: loaded
Mar 7 01:32:12.121897 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:32:12.121909 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:32:12.121923 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:32:12.121940 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:32:12.121953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:32:12.121965 kernel: pnp: PnP ACPI init
Mar 7 01:32:12.124275 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:32:12.124298 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:32:12.124313 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:32:12.124326 kernel: NET: Registered PF_INET protocol family
Mar 7 01:32:12.124338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:32:12.124358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:32:12.124370 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:32:12.124384 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:32:12.124397 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:32:12.124408 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:32:12.124422 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.124435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.124447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:32:12.124460 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:32:12.124651 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:32:12.124850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:32:12.125009 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:32:12.127202 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:32:12.127330 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:32:12.127448 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:32:12.127458 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:32:12.127466 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:32:12.127478 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:32:12.127485 kernel: Initialise system trusted keyrings
Mar 7 01:32:12.127493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:32:12.127500 kernel: Key type asymmetric registered
Mar 7 01:32:12.127507 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:32:12.127514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:32:12.127521 kernel: io scheduler mq-deadline registered
Mar 7 01:32:12.127529 kernel: io scheduler kyber registered
Mar 7 01:32:12.127536 kernel: io scheduler bfq registered
Mar 7 01:32:12.127543 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:32:12.127554 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:32:12.127561 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:32:12.127568 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:32:12.127575 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:32:12.127582 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:32:12.127589 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:32:12.127596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:32:12.127731 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:32:12.127747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:32:12.127865 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:32:12.127983 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:32:11 UTC (1772847131)
Mar 7 01:32:12.128102 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:32:12.128112 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:32:12.128120 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:32:12.128157 kernel: Segment Routing with IPv6
Mar 7 01:32:12.128166 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:32:12.128177 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:32:12.128184 kernel: Key type dns_resolver registered
Mar 7 01:32:12.128192 kernel: IPI shorthand broadcast: enabled
Mar 7 01:32:12.128199 kernel: sched_clock: Marking stable (987006432, 343179807)->(1462387251, -132201012)
Mar 7 01:32:12.128206 kernel: registered taskstats version 1
Mar 7 01:32:12.128213 kernel: Loading compiled-in X.509 certificates
Mar 7 01:32:12.128220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:32:12.128227 kernel: Key type .fscrypt registered
Mar 7 01:32:12.128235 kernel: Key type fscrypt-provisioning registered
Mar 7 01:32:12.128245 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:32:12.128251 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:32:12.128259 kernel: ima: No architecture policies found
Mar 7 01:32:12.128266 kernel: clk: Disabling unused clocks
Mar 7 01:32:12.128273 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:32:12.128280 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:32:12.128287 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:32:12.128294 kernel: Run /init as init process
Mar 7 01:32:12.128301 kernel: with arguments:
Mar 7 01:32:12.128311 kernel: /init
Mar 7 01:32:12.128317 kernel: with environment:
Mar 7 01:32:12.128324 kernel: HOME=/
Mar 7 01:32:12.128331 kernel: TERM=linux
Mar 7 01:32:12.128341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:32:12.128351 systemd[1]: Detected virtualization kvm.
Mar 7 01:32:12.128359 systemd[1]: Detected architecture x86-64.
Mar 7 01:32:12.128366 systemd[1]: Running in initrd.
Mar 7 01:32:12.128376 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:32:12.128383 systemd[1]: Hostname set to .
Mar 7 01:32:12.128390 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:32:12.128398 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:32:12.128405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
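[Editorial aside, not part of the log: the rtc_cmos line above reports the same instant in two forms, an ISO timestamp and a Unix epoch value. Converting the epoch back confirms they agree:]

```python
from datetime import datetime, timezone

# Epoch value taken from the rtc_cmos log line above.
epoch = 1772847131
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.isoformat())  # 2026-03-07T01:32:11+00:00
```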
Mar 7 01:32:12.128428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:32:12.128442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:32:12.128449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:32:12.128457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:32:12.128465 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:32:12.128474 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:32:12.128482 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:32:12.128492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:32:12.128500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:32:12.128508 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:32:12.128515 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:32:12.128523 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:32:12.128531 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:32:12.128539 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:32:12.128546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:32:12.128554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:32:12.128564 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:32:12.128572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:32:12.128580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
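[Editorial aside, not part of the log: the `\x2d` sequences in the device unit names above come from systemd's unit-name escaping, in which `/` separators become `-` and a literal `-` in a path component is hex-escaped. A simplified sketch of that rule (leading-dot and empty-path edge cases omitted), reproducing the unit names the log prints; `escape_device_path` is a hypothetical helper name, not a systemd API:]

```python
def escape_device_path(path):
    """Simplified sketch of systemd path escaping (cf. systemd-escape --path):
    strip the leading '/', turn remaining '/' into '-', and hex-escape any
    character that is not alphanumeric, ':', '_' or '.' as \\xXX."""
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(escape_device_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```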
Mar 7 01:32:12.128587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:32:12.128595 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:32:12.128603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:32:12.128611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:32:12.128618 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:32:12.128629 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:32:12.128636 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:32:12.128644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:32:12.128652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:12.128660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:32:12.128692 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:32:12.128717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:32:12.128725 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:32:12.128736 systemd-journald[178]: Journal started
Mar 7 01:32:12.128753 systemd-journald[178]: Runtime Journal (/run/log/journal/39f49782e17e4a31b905953db8acdc15) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:32:12.112891 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:32:12.137164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:32:12.143161 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:32:12.144162 kernel: Bridge firewalling registered
Mar 7 01:32:12.144189 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:32:12.237166 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:32:12.238007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:32:12.240105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:12.241199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:32:12.248258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:12.251108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:32:12.254275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:32:12.286279 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:32:12.290187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:12.296719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:32:12.298901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:32:12.306302 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:32:12.308813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:32:12.313301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:32:12.317704 dracut-cmdline[211]: dracut-dracut-053
Mar 7 01:32:12.322328 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:32:12.359251 systemd-resolved[217]: Positive Trust Anchors:
Mar 7 01:32:12.359264 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:32:12.359292 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:32:12.366403 systemd-resolved[217]: Defaulting to hostname 'linux'.
Mar 7 01:32:12.368067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:32:12.369784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:32:12.409181 kernel: SCSI subsystem initialized
Mar 7 01:32:12.418151 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:32:12.429349 kernel: iscsi: registered transport (tcp)
Mar 7 01:32:12.450191 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:32:12.450238 kernel: QLogic iSCSI HBA Driver
Mar 7 01:32:12.491824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:32:12.497284 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:32:12.524325 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:32:12.524385 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:32:12.526570 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:32:12.575173 kernel: raid6: avx2x4 gen() 25665 MB/s
Mar 7 01:32:12.593161 kernel: raid6: avx2x2 gen() 21370 MB/s
Mar 7 01:32:12.611296 kernel: raid6: avx2x1 gen() 18981 MB/s
Mar 7 01:32:12.611351 kernel: raid6: using algorithm avx2x4 gen() 25665 MB/s
Mar 7 01:32:12.631494 kernel: raid6: .... xor() 2968 MB/s, rmw enabled
Mar 7 01:32:12.631546 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:32:12.656163 kernel: xor: automatically using best checksumming function avx
Mar 7 01:32:12.789180 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:32:12.804208 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:32:12.811277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:32:12.825821 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Mar 7 01:32:12.830585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:32:12.839270 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:32:12.857388 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Mar 7 01:32:12.896557 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:32:12.906341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:32:13.000735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:32:13.009281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:32:13.032355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:32:13.038088 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:32:13.040254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:32:13.041955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:32:13.048275 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:32:13.070876 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:32:13.087190 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:32:13.104231 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:32:13.115489 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:32:13.115520 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:32:13.120956 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 01:32:13.140161 kernel: libata version 3.00 loaded.
Mar 7 01:32:13.143884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:32:13.144032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:13.146360 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:13.147961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:32:13.148221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:13.149839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:13.160424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:13.176196 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:32:13.181761 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:32:13.186580 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:32:13.186791 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:32:13.379552 kernel: scsi host1: ahci
Mar 7 01:32:13.379847 kernel: scsi host2: ahci
Mar 7 01:32:13.380017 kernel: scsi host3: ahci
Mar 7 01:32:13.380197 kernel: scsi host4: ahci
Mar 7 01:32:13.385776 kernel: scsi host5: ahci
Mar 7 01:32:13.385971 kernel: scsi host6: ahci
Mar 7 01:32:13.391489 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Mar 7 01:32:13.391520 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Mar 7 01:32:13.391541 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Mar 7 01:32:13.391560 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Mar 7 01:32:13.391578 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Mar 7 01:32:13.391595 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Mar 7 01:32:13.534680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:13.545354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:13.565749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:13.694608 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.694707 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.705866 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.705927 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.734158 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.734206 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.750462 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:32:13.777429 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:32:13.777815 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:32:13.779428 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:32:13.779607 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:32:13.789003 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:32:13.789032 kernel: GPT:9289727 != 167739391
Mar 7 01:32:13.789044 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:32:13.792738 kernel: GPT:9289727 != 167739391
Mar 7 01:32:13.792758 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:32:13.795406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:13.799151 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:32:13.836161 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (444)
Mar 7 01:32:13.841806 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (463)
Mar 7 01:32:13.848343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:32:13.861185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:32:13.869482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:32:13.876345 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:32:13.877325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:32:13.885285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:32:13.891253 disk-uuid[566]: Primary Header is updated.
Mar 7 01:32:13.891253 disk-uuid[566]: Secondary Entries is updated.
Mar 7 01:32:13.891253 disk-uuid[566]: Secondary Header is updated.
Mar 7 01:32:13.897168 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:13.904171 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:14.907178 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:14.910112 disk-uuid[567]: The operation has completed successfully.
Mar 7 01:32:14.990536 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:32:14.990723 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:32:15.012308 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:32:15.020386 sh[581]: Success
Mar 7 01:32:15.039185 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:32:15.095387 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:32:15.106418 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:32:15.109464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:32:15.140346 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:32:15.140421 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:15.140443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:32:15.145215 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:32:15.148206 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:32:15.160158 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:32:15.163343 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:32:15.165004 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:32:15.188517 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:32:15.193228 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:32:15.207899 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:15.207942 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:15.212791 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:15.223889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:15.223925 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:15.237095 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:32:15.243351 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:15.249770 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:32:15.258801 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:32:15.342411 ignition[687]: Ignition 2.19.0
Mar 7 01:32:15.343277 ignition[687]: Stage: fetch-offline
Mar 7 01:32:15.343084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:32:15.343329 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:15.343341 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:15.344096 ignition[687]: parsed url from cmdline: ""
Mar 7 01:32:15.344102 ignition[687]: no config URL provided
Mar 7 01:32:15.344109 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.344122 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.351312 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:32:15.344149 ignition[687]: failed to fetch config: resource requires networking
Mar 7 01:32:15.352944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:32:15.344354 ignition[687]: Ignition finished successfully
Mar 7 01:32:15.394754 systemd-networkd[767]: lo: Link UP
Mar 7 01:32:15.394765 systemd-networkd[767]: lo: Gained carrier
Mar 7 01:32:15.397089 systemd-networkd[767]: Enumeration completed
Mar 7 01:32:15.397628 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:15.397633 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:32:15.399601 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:32:15.399990 systemd-networkd[767]: eth0: Link UP
Mar 7 01:32:15.399996 systemd-networkd[767]: eth0: Gained carrier
Mar 7 01:32:15.400017 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:15.401973 systemd[1]: Reached target network.target - Network.
Mar 7 01:32:15.411302 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:32:15.426018 ignition[770]: Ignition 2.19.0
Mar 7 01:32:15.426032 ignition[770]: Stage: fetch
Mar 7 01:32:15.426238 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:15.426251 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:15.426333 ignition[770]: parsed url from cmdline: ""
Mar 7 01:32:15.426337 ignition[770]: no config URL provided
Mar 7 01:32:15.426343 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.426352 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.426376 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:32:15.426516 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:15.627027 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:32:15.627214 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:16.027921 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:32:16.028223 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:16.221224 systemd-networkd[767]: eth0: DHCPv4 address 172.238.171.132/24, gateway 172.238.171.1 acquired from 23.194.118.60
Mar 7 01:32:16.828852 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:32:16.926566 ignition[770]: PUT result: OK
Mar 7 01:32:16.926640 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:32:16.964473 systemd-networkd[767]: eth0: Gained IPv6LL
Mar 7 01:32:17.036303 ignition[770]: GET result: OK
Mar 7 01:32:17.036446 ignition[770]: parsing config with SHA512: d2676ab1b62f08783c0f7bd69e294da0abfcc8fdf79dbd85711ae425d76de7ff3fb6316fe567979edce0ad639372388c349448906ff35ba7e684292db59a1eb9
Mar 7 01:32:17.041644 unknown[770]: fetched base config from "system"
Mar 7 01:32:17.041664 unknown[770]: fetched base config from "system"
Mar 7 01:32:17.042273 ignition[770]: fetch: fetch complete
Mar 7 01:32:17.041674 unknown[770]: fetched user config from "akamai"
Mar 7 01:32:17.042286 ignition[770]: fetch: fetch passed
Mar 7 01:32:17.045858 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:32:17.042354 ignition[770]: Ignition finished successfully
Mar 7 01:32:17.053302 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:32:17.079194 ignition[778]: Ignition 2.19.0
Mar 7 01:32:17.079213 ignition[778]: Stage: kargs
Mar 7 01:32:17.079398 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.079411 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.083554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:32:17.080294 ignition[778]: kargs: kargs passed
Mar 7 01:32:17.080343 ignition[778]: Ignition finished successfully
Mar 7 01:32:17.091456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:32:17.104991 ignition[784]: Ignition 2.19.0
Mar 7 01:32:17.105005 ignition[784]: Stage: disks
Mar 7 01:32:17.105179 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.105191 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.108099 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:32:17.105899 ignition[784]: disks: disks passed
Mar 7 01:32:17.131873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:32:17.105947 ignition[784]: Ignition finished successfully
Mar 7 01:32:17.133456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:32:17.134977 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:32:17.136404 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:32:17.137997 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:32:17.147304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:32:17.164612 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:32:17.167725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:32:17.176271 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:32:17.266169 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:32:17.266715 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:32:17.268297 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:32:17.279228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:32:17.282726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:32:17.283777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:32:17.283823 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:32:17.283852 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:32:17.290545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:32:17.300159 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800)
Mar 7 01:32:17.307152 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:17.307180 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:17.305298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:32:17.312107 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:17.316505 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:17.316530 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:17.321425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:32:17.356501 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:32:17.363271 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:32:17.369303 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:32:17.373470 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:32:17.469375 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:32:17.481239 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:32:17.486453 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:32:17.493778 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:32:17.495382 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:17.518950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:32:17.520990 ignition[913]: INFO : Ignition 2.19.0
Mar 7 01:32:17.520990 ignition[913]: INFO : Stage: mount
Mar 7 01:32:17.520990 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.520990 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.524527 ignition[913]: INFO : mount: mount passed
Mar 7 01:32:17.524527 ignition[913]: INFO : Ignition finished successfully
Mar 7 01:32:17.524594 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:32:17.530228 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:32:18.272456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:32:18.286161 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926)
Mar 7 01:32:18.290529 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:18.290551 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:18.293236 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:18.299521 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:18.299545 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:18.303658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:32:18.327524 ignition[942]: INFO : Ignition 2.19.0
Mar 7 01:32:18.327524 ignition[942]: INFO : Stage: files
Mar 7 01:32:18.329528 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:18.329528 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:18.329528 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:32:18.329528 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:32:18.329528 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:32:18.334803 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:32:18.334803 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:32:18.335270 unknown[942]: wrote ssh authorized keys file for user: core
Mar 7 01:32:18.650789 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:32:18.825837 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:32:18.825837 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:32:19.298218 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:32:19.778002 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:19.778002 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: files passed
Mar 7 01:32:19.809694 ignition[942]: INFO : Ignition finished successfully
Mar 7 01:32:19.783955 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:32:19.818367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:32:19.824969 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:32:19.829201 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:32:19.829992 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:32:19.851437 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:32:19.853802 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:32:19.855113 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:32:19.854498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:32:19.856685 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:32:19.863346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:32:19.900879 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:32:19.901032 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:32:19.904496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:32:19.905623 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:32:19.907507 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:32:19.913316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:32:19.933374 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:32:19.939482 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:32:19.954486 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:32:19.956840 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:32:19.959043 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:32:19.960100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 7 01:32:19.960332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:32:19.962720 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:32:19.964019 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:32:19.965667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:32:19.967502 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:32:19.969067 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:32:19.971085 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:32:19.972674 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:32:19.974616 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:32:19.976764 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:32:19.978477 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:32:19.980302 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:32:19.980415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:32:19.982558 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:32:19.983745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:32:19.985511 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:32:19.986012 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:32:19.987483 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:32:19.987594 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:32:19.989726 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:32:19.989840 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 7 01:32:19.991039 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:32:19.991159 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:32:19.998341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:32:20.001051 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:32:20.001962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:32:20.005351 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:32:20.007638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:32:20.007742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:32:20.018698 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:32:20.018820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:32:20.025146 ignition[996]: INFO : Ignition 2.19.0 Mar 7 01:32:20.025146 ignition[996]: INFO : Stage: umount Mar 7 01:32:20.025146 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:32:20.025146 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:32:20.035619 ignition[996]: INFO : umount: umount passed Mar 7 01:32:20.035619 ignition[996]: INFO : Ignition finished successfully Mar 7 01:32:20.030605 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:32:20.030735 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:32:20.032073 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:32:20.032170 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:32:20.033381 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:32:20.033434 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:32:20.036361 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Mar 7 01:32:20.036412 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:32:20.038814 systemd[1]: Stopped target network.target - Network. Mar 7 01:32:20.064782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:32:20.064879 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:32:20.066670 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:32:20.068259 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:32:20.073176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:32:20.074493 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:32:20.076040 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:32:20.077684 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:32:20.077741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:32:20.079738 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:32:20.079791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:32:20.081777 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:32:20.081833 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:32:20.083303 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:32:20.083352 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:32:20.085363 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:32:20.087662 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:32:20.089194 systemd-networkd[767]: eth0: DHCPv6 lease lost Mar 7 01:32:20.093052 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:32:20.094011 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 7 01:32:20.094507 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:32:20.098673 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:32:20.098809 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:32:20.103520 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:32:20.103697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:32:20.106774 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:32:20.106855 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:32:20.109273 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:32:20.109331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:32:20.117443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:32:20.118517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:32:20.118577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:32:20.123228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:32:20.123472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:32:20.125432 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:32:20.125487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:32:20.127315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:32:20.127366 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:32:20.129686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:32:20.155402 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:32:20.155622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 7 01:32:20.156930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:32:20.156983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:32:20.160240 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:32:20.160282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:32:20.161417 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:32:20.161478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:32:20.163307 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:32:20.163361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:32:20.165804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:32:20.165859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:32:20.173331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:32:20.174332 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:32:20.174423 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:32:20.179004 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:32:20.179082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:32:20.182574 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:32:20.182630 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:32:20.184568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:32:20.184623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:32:20.188303 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 7 01:32:20.188431 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:32:20.197847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:32:20.197984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:32:20.200569 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:32:20.208279 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:32:20.221650 systemd[1]: Switching root. Mar 7 01:32:20.257458 systemd-journald[178]: Journal stopped
Mar 7 01:32:12.103388 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:32:12.103407 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Mar 7 01:32:12.103418 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Mar 7 01:32:12.103429 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 7 01:32:12.103444 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Mar 7 01:32:12.103458 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Mar 7 01:32:12.103469 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Mar 7 01:32:12.103481 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Mar 7 01:32:12.103759 kernel: No NUMA configuration found Mar 7 01:32:12.103778 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Mar 7 01:32:12.103789 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Mar 7 01:32:12.103801 kernel: Zone ranges: Mar 7 01:32:12.103820 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:32:12.103832 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:32:12.103843 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:32:12.103856 kernel: Movable zone start for each node Mar 7 01:32:12.103866 kernel: Early memory node ranges Mar 7 01:32:12.103878 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 01:32:12.103890
kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Mar 7 01:32:12.103901 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:32:12.103911 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Mar 7 01:32:12.103923 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:32:12.103941 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 01:32:12.103952 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 7 01:32:12.103965 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:32:12.103976 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:32:12.103987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:32:12.103998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:32:12.104010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:32:12.104021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:32:12.104032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:32:12.104050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:32:12.104061 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:32:12.104072 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:32:12.104084 kernel: TSC deadline timer available Mar 7 01:32:12.104096 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:32:12.104107 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:32:12.104118 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:32:12.104476 kernel: kvm-guest: setup PV sched yield Mar 7 01:32:12.104494 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 01:32:12.104512 kernel: Booting paravirtualized kernel on KVM Mar 7 01:32:12.104524 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:32:12.104536 
kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:32:12.104547 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:32:12.104558 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:32:12.104571 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:32:12.104581 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:32:12.104593 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:32:12.104607 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:32:12.104624 kernel: random: crng init done Mar 7 01:32:12.104636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:32:12.104648 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:32:12.104660 kernel: Fallback order for Node 0: 0 Mar 7 01:32:12.104671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Mar 7 01:32:12.104683 kernel: Policy zone: Normal Mar 7 01:32:12.104695 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:32:12.104706 kernel: software IO TLB: area num 2. 
Mar 7 01:32:12.104742 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227296K reserved, 0K cma-reserved) Mar 7 01:32:12.104754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:32:12.104765 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:32:12.104778 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:32:12.104790 kernel: Dynamic Preempt: voluntary Mar 7 01:32:12.104801 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:32:12.104813 kernel: rcu: RCU event tracing is enabled. Mar 7 01:32:12.104826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:32:12.104837 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:32:12.104855 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:32:12.104866 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:32:12.104877 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:32:12.104890 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:32:12.104901 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:32:12.104912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 7 01:32:12.104925 kernel: Console: colour VGA+ 80x25 Mar 7 01:32:12.104936 kernel: printk: console [tty0] enabled Mar 7 01:32:12.104947 kernel: printk: console [ttyS0] enabled Mar 7 01:32:12.104964 kernel: ACPI: Core revision 20230628 Mar 7 01:32:12.104976 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:32:12.104987 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:32:12.105000 kernel: x2apic enabled Mar 7 01:32:12.105025 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:32:12.105042 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:32:12.105054 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:32:12.105066 kernel: kvm-guest: setup PV IPIs Mar 7 01:32:12.105079 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:32:12.105092 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:32:12.105104 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) Mar 7 01:32:12.105117 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:32:12.105169 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:32:12.105183 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:32:12.105196 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:32:12.105257 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:32:12.105274 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:32:12.105292 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 7 01:32:12.105305 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 7 01:32:12.105317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 7 01:32:12.105330 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:32:12.105344 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:32:12.105356 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:32:12.105368 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:32:12.105381 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:32:12.105398 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:32:12.105411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:32:12.105424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:32:12.105436 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:32:12.105449 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 7 01:32:12.105461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:32:12.105473 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Mar 7 01:32:12.105487 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Mar 7 01:32:12.105499 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:32:12.105515 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:32:12.105528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:32:12.105540 kernel: landlock: Up and running. Mar 7 01:32:12.105552 kernel: SELinux: Initializing. Mar 7 01:32:12.105564 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:32:12.105577 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:32:12.105589 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:32:12.105601 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:32:12.105614 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 7 01:32:12.105631 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:32:12.105643 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:32:12.105656 kernel: ... version:                0
Mar 7 01:32:12.105668 kernel: ... bit width:              48
Mar 7 01:32:12.105680 kernel: ... generic registers:      6
Mar 7 01:32:12.105692 kernel: ... value mask:             0000ffffffffffff
Mar 7 01:32:12.105705 kernel: ... max period:             00007fffffffffff
Mar 7 01:32:12.105716 kernel: ... fixed-purpose events:   0
Mar 7 01:32:12.105728 kernel: ... event mask:             000000000000003f
Mar 7 01:32:12.105746 kernel: signal: max sigframe size: 3376
Mar 7 01:32:12.105758 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:32:12.105770 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:32:12.105783 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:32:12.105794 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:32:12.105806 kernel: .... node #0, CPUs: #1
Mar 7 01:32:12.105819 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:32:12.105831 kernel: smpboot: Max logical packages: 1
Mar 7 01:32:12.105843 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 7 01:32:12.105860 kernel: devtmpfs: initialized
Mar 7 01:32:12.105872 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:32:12.105884 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:32:12.105895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:32:12.105908 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:32:12.105921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:32:12.105933 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:32:12.105945 kernel: audit: type=2000 audit(1772847130.737:1): state=initialized audit_enabled=0 res=1
Mar 7 01:32:12.105956 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:32:12.105975 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:32:12.105986 kernel: cpuidle: using governor menu
Mar 7 01:32:12.105998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:32:12.106009 kernel: dca service started, version 1.12.1
Mar 7 01:32:12.106023 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:32:12.106034 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:32:12.106046 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:32:12.106057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:32:12.106071 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:32:12.106087 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:32:12.106099 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:32:12.106111 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:32:12.106124 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:32:12.106187 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:32:12.106200 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:32:12.106214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:32:12.106226 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:32:12.106238 kernel: ACPI: Interpreter enabled
Mar 7 01:32:12.106257 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:32:12.106269 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:32:12.106282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:32:12.106294 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:32:12.106306 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:32:12.106319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:32:12.106628 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:32:12.107295 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:32:12.107504 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:32:12.107521 kernel: PCI host bridge to bus 0000:00
Mar 7 01:32:12.107717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:32:12.107894 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:32:12.109250 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:32:12.110195 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:32:12.110395 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:32:12.112930 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:32:12.113114 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:32:12.114438 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:32:12.114654 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:32:12.114874 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:32:12.115110 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:32:12.115665 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:32:12.115862 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:32:12.116076 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:32:12.116403 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:32:12.116599 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:32:12.116795 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:32:12.117002 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:32:12.117248 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:32:12.117441 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:32:12.118256 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:32:12.118454 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:32:12.118665 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:32:12.118859 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:32:12.119072 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:32:12.120384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:32:12.120585 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:32:12.120793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:32:12.120984 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:32:12.121003 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:32:12.121016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:32:12.121028 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:32:12.121048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:32:12.121061 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:32:12.121074 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:32:12.121086 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:32:12.121097 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:32:12.121110 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:32:12.121123 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:32:12.121154 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:32:12.121167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:32:12.121184 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:32:12.121197 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:32:12.121209 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:32:12.121220 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:32:12.121232 kernel: iommu: Default domain type: Translated
Mar 7 01:32:12.121245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:32:12.121257 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:32:12.121269 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:32:12.121281 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:32:12.121299 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:32:12.121489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:32:12.121677 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:32:12.121866 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:32:12.121885 kernel: vgaarb: loaded
Mar 7 01:32:12.121897 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:32:12.121909 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:32:12.121923 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:32:12.121940 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:32:12.121953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:32:12.121965 kernel: pnp: PnP ACPI init
Mar 7 01:32:12.124275 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:32:12.124298 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:32:12.124313 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:32:12.124326 kernel: NET: Registered PF_INET protocol family
Mar 7 01:32:12.124338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:32:12.124358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:32:12.124370 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:32:12.124384 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:32:12.124397 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:32:12.124408 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:32:12.124422 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.124435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:32:12.124447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:32:12.124460 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:32:12.124651 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:32:12.124850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:32:12.125009 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:32:12.127202 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:32:12.127330 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:32:12.127448 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:32:12.127458 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:32:12.127466 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:32:12.127478 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:32:12.127485 kernel: Initialise system trusted keyrings
Mar 7 01:32:12.127493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:32:12.127500 kernel: Key type asymmetric registered
Mar 7 01:32:12.127507 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:32:12.127514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:32:12.127521 kernel: io scheduler mq-deadline registered
Mar 7 01:32:12.127529 kernel: io scheduler kyber registered
Mar 7 01:32:12.127536 kernel: io scheduler bfq registered
Mar 7 01:32:12.127543 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:32:12.127554 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:32:12.127561 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:32:12.127568 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:32:12.127575 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:32:12.127582 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:32:12.127589 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:32:12.127596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:32:12.127731 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:32:12.127747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:32:12.127865 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:32:12.127983 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:32:11 UTC (1772847131)
Mar 7 01:32:12.128102 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:32:12.128112 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:32:12.128120 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:32:12.128157 kernel: Segment Routing with IPv6
Mar 7 01:32:12.128166 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:32:12.128177 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:32:12.128184 kernel: Key type dns_resolver registered
Mar 7 01:32:12.128192 kernel: IPI shorthand broadcast: enabled
Mar 7 01:32:12.128199 kernel: sched_clock: Marking stable (987006432, 343179807)->(1462387251, -132201012)
Mar 7 01:32:12.128206 kernel: registered taskstats version 1
Mar 7 01:32:12.128213 kernel: Loading compiled-in X.509 certificates
Mar 7 01:32:12.128220 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:32:12.128227 kernel: Key type .fscrypt registered
Mar 7 01:32:12.128235 kernel: Key type fscrypt-provisioning registered
Mar 7 01:32:12.128245 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:32:12.128251 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:32:12.128259 kernel: ima: No architecture policies found
Mar 7 01:32:12.128266 kernel: clk: Disabling unused clocks
Mar 7 01:32:12.128273 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:32:12.128280 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:32:12.128287 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:32:12.128294 kernel: Run /init as init process
Mar 7 01:32:12.128301 kernel: with arguments:
Mar 7 01:32:12.128311 kernel: /init
Mar 7 01:32:12.128317 kernel: with environment:
Mar 7 01:32:12.128324 kernel: HOME=/
Mar 7 01:32:12.128331 kernel: TERM=linux
Mar 7 01:32:12.128341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:32:12.128351 systemd[1]: Detected virtualization kvm.
Mar 7 01:32:12.128359 systemd[1]: Detected architecture x86-64.
Mar 7 01:32:12.128366 systemd[1]: Running in initrd.
Mar 7 01:32:12.128376 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:32:12.128383 systemd[1]: Hostname set to .
Mar 7 01:32:12.128390 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:32:12.128398 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:32:12.128405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:32:12.128428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:32:12.128442 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:32:12.128449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:32:12.128457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:32:12.128465 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:32:12.128474 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:32:12.128482 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:32:12.128492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:32:12.128500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:32:12.128508 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:32:12.128515 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:32:12.128523 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:32:12.128531 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:32:12.128539 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:32:12.128546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:32:12.128554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:32:12.128564 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:32:12.128572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:32:12.128580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:32:12.128587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:32:12.128595 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:32:12.128603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:32:12.128611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:32:12.128618 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:32:12.128629 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:32:12.128636 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:32:12.128644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:32:12.128652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:12.128660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:32:12.128692 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:32:12.128717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:32:12.128725 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:32:12.128736 systemd-journald[178]: Journal started
Mar 7 01:32:12.128753 systemd-journald[178]: Runtime Journal (/run/log/journal/39f49782e17e4a31b905953db8acdc15) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:32:12.112891 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:32:12.137164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:32:12.143161 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:32:12.144162 kernel: Bridge firewalling registered
Mar 7 01:32:12.144189 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:32:12.237166 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:32:12.238007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:32:12.240105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:12.241199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:32:12.248258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:12.251108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:32:12.254275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:32:12.286279 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:32:12.290187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:12.296719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:32:12.298901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:32:12.306302 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:32:12.308813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:32:12.313301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:32:12.317704 dracut-cmdline[211]: dracut-dracut-053
Mar 7 01:32:12.322328 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:32:12.359251 systemd-resolved[217]: Positive Trust Anchors:
Mar 7 01:32:12.359264 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:32:12.359292 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:32:12.366403 systemd-resolved[217]: Defaulting to hostname 'linux'.
Mar 7 01:32:12.368067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:32:12.369784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:32:12.409181 kernel: SCSI subsystem initialized
Mar 7 01:32:12.418151 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:32:12.429349 kernel: iscsi: registered transport (tcp)
Mar 7 01:32:12.450191 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:32:12.450238 kernel: QLogic iSCSI HBA Driver
Mar 7 01:32:12.491824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:32:12.497284 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:32:12.524325 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:32:12.524385 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:32:12.526570 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:32:12.575173 kernel: raid6: avx2x4 gen() 25665 MB/s
Mar 7 01:32:12.593161 kernel: raid6: avx2x2 gen() 21370 MB/s
Mar 7 01:32:12.611296 kernel: raid6: avx2x1 gen() 18981 MB/s
Mar 7 01:32:12.611351 kernel: raid6: using algorithm avx2x4 gen() 25665 MB/s
Mar 7 01:32:12.631494 kernel: raid6: .... xor() 2968 MB/s, rmw enabled
Mar 7 01:32:12.631546 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:32:12.656163 kernel: xor: automatically using best checksumming function   avx
Mar 7 01:32:12.789180 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:32:12.804208 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:32:12.811277 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:32:12.825821 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Mar 7 01:32:12.830585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:32:12.839270 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:32:12.857388 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Mar 7 01:32:12.896557 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:32:12.906341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:32:13.000735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:32:13.009281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:32:13.032355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:32:13.038088 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:32:13.040254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:32:13.041955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:32:13.048275 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:32:13.070876 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:32:13.087190 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:32:13.104231 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:32:13.115489 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:32:13.115520 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:32:13.120956 kernel: scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
Mar 7 01:32:13.140161 kernel: libata version 3.00 loaded.
Mar 7 01:32:13.143884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:32:13.144032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:13.146360 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:13.147961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:32:13.148221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:13.149839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:13.160424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:13.176196 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:32:13.181761 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:32:13.186580 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:32:13.186791 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:32:13.379552 kernel: scsi host1: ahci
Mar 7 01:32:13.379847 kernel: scsi host2: ahci
Mar 7 01:32:13.380017 kernel: scsi host3: ahci
Mar 7 01:32:13.380197 kernel: scsi host4: ahci
Mar 7 01:32:13.385776 kernel: scsi host5: ahci
Mar 7 01:32:13.385971 kernel: scsi host6: ahci
Mar 7 01:32:13.391489 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Mar 7 01:32:13.391520 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Mar 7 01:32:13.391541 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Mar 7 01:32:13.391560 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Mar 7 01:32:13.391578 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Mar 7 01:32:13.391595 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Mar 7 01:32:13.534680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:13.545354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:32:13.565749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:13.694608 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.694707 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.705866 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.705927 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.734158 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.734206 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:32:13.750462 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:32:13.777429 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:32:13.777815 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:32:13.779428 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:32:13.779607 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:32:13.789003 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:32:13.789032 kernel: GPT:9289727 != 167739391
Mar 7 01:32:13.789044 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:32:13.792738 kernel: GPT:9289727 != 167739391
Mar 7 01:32:13.792758 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:32:13.795406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:13.799151 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:32:13.836161 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (444)
Mar 7 01:32:13.841806 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (463)
Mar 7 01:32:13.848343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:32:13.861185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:32:13.869482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:32:13.876345 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:32:13.877325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:32:13.885285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:32:13.891253 disk-uuid[566]: Primary Header is updated.
Mar 7 01:32:13.891253 disk-uuid[566]: Secondary Entries is updated.
Mar 7 01:32:13.891253 disk-uuid[566]: Secondary Header is updated.
Mar 7 01:32:13.897168 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:13.904171 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:14.907178 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:32:14.910112 disk-uuid[567]: The operation has completed successfully.
Mar 7 01:32:14.990536 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:32:14.990723 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:32:15.012308 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:32:15.020386 sh[581]: Success
Mar 7 01:32:15.039185 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:32:15.095387 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:32:15.106418 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:32:15.109464 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:32:15.140346 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:32:15.140421 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:15.140443 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:32:15.145215 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:32:15.148206 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:32:15.160158 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:32:15.163343 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:32:15.165004 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:32:15.188517 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:32:15.193228 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:32:15.207899 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:15.207942 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:15.212791 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:15.223889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:15.223925 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:15.237095 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:32:15.243351 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:15.249770 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:32:15.258801 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:32:15.342411 ignition[687]: Ignition 2.19.0
Mar 7 01:32:15.343277 ignition[687]: Stage: fetch-offline
Mar 7 01:32:15.343084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:32:15.343329 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:15.343341 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:15.344096 ignition[687]: parsed url from cmdline: ""
Mar 7 01:32:15.344102 ignition[687]: no config URL provided
Mar 7 01:32:15.344109 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.344122 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.351312 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:32:15.344149 ignition[687]: failed to fetch config: resource requires networking
Mar 7 01:32:15.352944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:32:15.344354 ignition[687]: Ignition finished successfully
Mar 7 01:32:15.394754 systemd-networkd[767]: lo: Link UP
Mar 7 01:32:15.394765 systemd-networkd[767]: lo: Gained carrier
Mar 7 01:32:15.397089 systemd-networkd[767]: Enumeration completed
Mar 7 01:32:15.397628 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:15.397633 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:32:15.399601 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:32:15.399990 systemd-networkd[767]: eth0: Link UP
Mar 7 01:32:15.399996 systemd-networkd[767]: eth0: Gained carrier
Mar 7 01:32:15.400017 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:15.401973 systemd[1]: Reached target network.target - Network.
Mar 7 01:32:15.411302 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:32:15.426018 ignition[770]: Ignition 2.19.0
Mar 7 01:32:15.426032 ignition[770]: Stage: fetch
Mar 7 01:32:15.426238 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:15.426251 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:15.426333 ignition[770]: parsed url from cmdline: ""
Mar 7 01:32:15.426337 ignition[770]: no config URL provided
Mar 7 01:32:15.426343 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.426352 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:32:15.426376 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:32:15.426516 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:15.627027 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:32:15.627214 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:16.027921 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:32:16.028223 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:32:16.221224 systemd-networkd[767]: eth0: DHCPv4 address 172.238.171.132/24, gateway 172.238.171.1 acquired from 23.194.118.60
Mar 7 01:32:16.828852 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:32:16.926566 ignition[770]: PUT result: OK
Mar 7 01:32:16.926640 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:32:16.964473 systemd-networkd[767]: eth0: Gained IPv6LL
Mar 7 01:32:17.036303 ignition[770]: GET result: OK
Mar 7 01:32:17.036446 ignition[770]: parsing config with SHA512: d2676ab1b62f08783c0f7bd69e294da0abfcc8fdf79dbd85711ae425d76de7ff3fb6316fe567979edce0ad639372388c349448906ff35ba7e684292db59a1eb9
Mar 7 01:32:17.041644 unknown[770]: fetched base config from "system"
Mar 7 01:32:17.041664 unknown[770]: fetched base config from "system"
Mar 7 01:32:17.042273 ignition[770]: fetch: fetch complete
Mar 7 01:32:17.041674 unknown[770]: fetched user config from "akamai"
Mar 7 01:32:17.042286 ignition[770]: fetch: fetch passed
Mar 7 01:32:17.045858 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:32:17.042354 ignition[770]: Ignition finished successfully
Mar 7 01:32:17.053302 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:32:17.079194 ignition[778]: Ignition 2.19.0
Mar 7 01:32:17.079213 ignition[778]: Stage: kargs
Mar 7 01:32:17.079398 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.079411 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.083554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:32:17.080294 ignition[778]: kargs: kargs passed
Mar 7 01:32:17.080343 ignition[778]: Ignition finished successfully
Mar 7 01:32:17.091456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:32:17.104991 ignition[784]: Ignition 2.19.0
Mar 7 01:32:17.105005 ignition[784]: Stage: disks
Mar 7 01:32:17.105179 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.105191 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.108099 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:32:17.105899 ignition[784]: disks: disks passed
Mar 7 01:32:17.131873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:32:17.105947 ignition[784]: Ignition finished successfully
Mar 7 01:32:17.133456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:32:17.134977 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:32:17.136404 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:32:17.137997 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:32:17.147304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:32:17.164612 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:32:17.167725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:32:17.176271 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:32:17.266169 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:32:17.266715 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:32:17.268297 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:32:17.279228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:32:17.282726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:32:17.283777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:32:17.283823 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:32:17.283852 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:32:17.290545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:32:17.300159 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (800)
Mar 7 01:32:17.307152 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:17.307180 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:17.305298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:32:17.312107 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:17.316505 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:17.316530 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:17.321425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:32:17.356501 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:32:17.363271 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:32:17.369303 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:32:17.373470 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:32:17.469375 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:32:17.481239 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:32:17.486453 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:32:17.493778 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:32:17.495382 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:17.518950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:32:17.520990 ignition[913]: INFO : Ignition 2.19.0
Mar 7 01:32:17.520990 ignition[913]: INFO : Stage: mount
Mar 7 01:32:17.520990 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:17.520990 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:17.524527 ignition[913]: INFO : mount: mount passed
Mar 7 01:32:17.524527 ignition[913]: INFO : Ignition finished successfully
Mar 7 01:32:17.524594 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:32:17.530228 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:32:18.272456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:32:18.286161 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926)
Mar 7 01:32:18.290529 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:32:18.290551 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:32:18.293236 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:32:18.299521 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:32:18.299545 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:32:18.303658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:32:18.327524 ignition[942]: INFO : Ignition 2.19.0
Mar 7 01:32:18.327524 ignition[942]: INFO : Stage: files
Mar 7 01:32:18.329528 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:18.329528 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:18.329528 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:32:18.329528 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:32:18.329528 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:32:18.334803 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:32:18.334803 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:32:18.336942 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:32:18.335270 unknown[942]: wrote ssh authorized keys file for user: core
Mar 7 01:32:18.650789 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:32:18.825837 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:32:18.825837 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:18.829214 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:32:19.298218 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:32:19.778002 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:32:19.778002 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:32:19.809694 ignition[942]: INFO : files: files passed
Mar 7 01:32:19.809694 ignition[942]: INFO : Ignition finished successfully
Mar 7 01:32:19.783955 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:32:19.818367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:32:19.824969 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:32:19.829201 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:32:19.829992 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:32:19.851437 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:32:19.853802 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:32:19.855113 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:32:19.854498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:32:19.856685 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:32:19.863346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:32:19.900879 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:32:19.901032 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:32:19.904496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:32:19.905623 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:32:19.907507 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:32:19.913316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:32:19.933374 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:32:19.939482 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:32:19.954486 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:32:19.956840 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:32:19.959043 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:32:19.960100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:32:19.960332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:32:19.962720 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:32:19.964019 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:32:19.965667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:32:19.967502 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:32:19.969067 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:32:19.971085 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:32:19.972674 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:32:19.974616 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:32:19.976764 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:32:19.978477 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:32:19.980302 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:32:19.980415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:32:19.982558 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:32:19.983745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:32:19.985511 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:32:19.986012 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:32:19.987483 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:32:19.987594 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:32:19.989726 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:32:19.989840 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:32:19.991039 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:32:19.991159 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:32:19.998341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:32:20.001051 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:32:20.001962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:32:20.005351 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:32:20.007638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:32:20.007742 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:32:20.018698 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:32:20.018820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:32:20.025146 ignition[996]: INFO : Ignition 2.19.0
Mar 7 01:32:20.025146 ignition[996]: INFO : Stage: umount
Mar 7 01:32:20.025146 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:32:20.025146 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:32:20.035619 ignition[996]: INFO : umount: umount passed
Mar 7 01:32:20.035619 ignition[996]: INFO : Ignition finished successfully
Mar 7 01:32:20.030605 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:32:20.030735 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:32:20.032073 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:32:20.032170 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:32:20.033381 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:32:20.033434 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:32:20.036361 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:32:20.036412 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:32:20.038814 systemd[1]: Stopped target network.target - Network.
Mar 7 01:32:20.064782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:32:20.064879 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:32:20.066670 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:32:20.068259 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:32:20.073176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:32:20.074493 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:32:20.076040 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:32:20.077684 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:32:20.077741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:32:20.079738 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:32:20.079791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:32:20.081777 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:32:20.081833 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:32:20.083303 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:32:20.083352 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:32:20.085363 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:32:20.087662 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:32:20.089194 systemd-networkd[767]: eth0: DHCPv6 lease lost
Mar 7 01:32:20.093052 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:32:20.094011 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:32:20.094507 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:32:20.098673 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:32:20.098809 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:32:20.103520 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:32:20.103697 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:32:20.106774 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:32:20.106855 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:32:20.109273 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:32:20.109331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:32:20.117443 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:32:20.118517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:32:20.118577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:32:20.123228 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:32:20.123472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:32:20.125432 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:32:20.125487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:32:20.127315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:32:20.127366 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:32:20.129686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:32:20.155402 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:32:20.155622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:32:20.156930 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:32:20.156983 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:32:20.160240 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:32:20.160282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:32:20.161417 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:32:20.161478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:32:20.163307 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:32:20.163361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:32:20.165804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:32:20.165859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:32:20.173331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:32:20.174332 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:32:20.174423 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:32:20.179004 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:32:20.179082 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:32:20.182574 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:32:20.182630 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:32:20.184568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:32:20.184623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:20.188303 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:32:20.188431 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:32:20.197847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:32:20.197984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:32:20.200569 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:32:20.208279 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:32:20.221650 systemd[1]: Switching root.
Mar 7 01:32:20.257458 systemd-journald[178]: Journal stopped
Mar 7 01:32:21.696989 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:32:21.697019 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:32:21.697032 kernel: SELinux: policy capability open_perms=1
Mar 7 01:32:21.697041 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:32:21.697054 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:32:21.697063 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:32:21.697073 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:32:21.697083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:32:21.697092 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:32:21.697101 kernel: audit: type=1403 audit(1772847140.485:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:32:21.697111 systemd[1]: Successfully loaded SELinux policy in 61.356ms.
Mar 7 01:32:21.697126 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.625ms.
Mar 7 01:32:21.697160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:32:21.697172 systemd[1]: Detected virtualization kvm.
Mar 7 01:32:21.697183 systemd[1]: Detected architecture x86-64.
Mar 7 01:32:21.697195 systemd[1]: Detected first boot.
Mar 7 01:32:21.697209 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:32:21.697219 zram_generator::config[1055]: No configuration found.
Mar 7 01:32:21.697230 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:32:21.697242 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:32:21.697258 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:32:21.697270 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:32:21.697280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:32:21.697295 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:32:21.697311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:32:21.697328 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:32:21.697338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:32:21.697349 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:32:21.697359 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:32:21.697369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:32:21.697383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:32:21.697394 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:32:21.697404 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:32:21.697415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:32:21.697425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:32:21.697435 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:32:21.697447 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:32:21.697457 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:32:21.697470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:32:21.697481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:32:21.697494 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:32:21.697505 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:32:21.697515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:32:21.697526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:32:21.697536 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:32:21.697547 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:32:21.697560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:32:21.697570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:32:21.697581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:32:21.697595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:32:21.697611 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:32:21.697632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:32:21.697643 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:32:21.697654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:21.697664 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:32:21.697675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:32:21.697686 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:32:21.697697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:32:21.697708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:32:21.697721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:32:21.697732 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:32:21.697743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:32:21.697753 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:32:21.697764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:32:21.697774 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:32:21.697785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:32:21.697796 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:32:21.697809 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 01:32:21.697820 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 01:32:21.697831 kernel: fuse: init (API version 7.39)
Mar 7 01:32:21.697841 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:32:21.697851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:32:21.697862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:32:21.697873 kernel: loop: module loaded
Mar 7 01:32:21.697883 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:32:21.697897 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:32:21.697929 systemd-journald[1154]: Collecting audit messages is disabled.
Mar 7 01:32:21.697950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:21.697963 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:32:21.697977 systemd-journald[1154]: Journal started
Mar 7 01:32:21.697997 systemd-journald[1154]: Runtime Journal (/run/log/journal/4cf11c19bd114037ab879677379129d2) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:32:21.708972 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:32:21.710565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:32:21.711599 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:32:21.712827 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:32:21.714316 kernel: ACPI: bus type drm_connector registered
Mar 7 01:32:21.717579 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:32:21.718733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:32:21.719958 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:32:21.721421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:32:21.722939 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:32:21.723195 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:32:21.724959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:32:21.725270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:32:21.726633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:32:21.726917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:32:21.728066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:32:21.728511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:32:21.729862 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:32:21.730606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:32:21.732234 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:32:21.732593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:32:21.734155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:32:21.736043 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:32:21.737481 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:32:21.779074 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:32:21.788257 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:32:21.796215 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:32:21.798282 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:32:21.803567 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:32:21.816830 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:32:21.820300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:32:21.829294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:32:21.831847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:32:21.841556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:32:21.856271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:32:21.866739 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:32:21.868615 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:32:21.883414 systemd-journald[1154]: Time spent on flushing to /var/log/journal/4cf11c19bd114037ab879677379129d2 is 52.980ms for 965 entries.
Mar 7 01:32:21.883414 systemd-journald[1154]: System Journal (/var/log/journal/4cf11c19bd114037ab879677379129d2) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:32:21.959370 systemd-journald[1154]: Received client request to flush runtime journal.
Mar 7 01:32:21.893656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:32:21.899603 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:32:21.936888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:32:21.944993 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 7 01:32:21.945006 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Mar 7 01:32:21.945727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:32:21.958271 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:32:21.963997 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:32:21.975811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:32:21.990256 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:32:21.995576 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:32:22.026990 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:32:22.037383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:32:22.064344 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Mar 7 01:32:22.064773 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Mar 7 01:32:22.073641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:32:22.322821 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:32:22.330257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:32:22.361309 systemd-udevd[1228]: Using default interface naming scheme 'v255'.
Mar 7 01:32:22.389212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:32:22.400453 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:32:22.421450 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:32:22.463015 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 7 01:32:22.522433 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:32:22.577172 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:32:22.593163 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:32:22.593433 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:32:22.593643 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:32:22.601270 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:32:22.607187 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:32:22.635607 systemd-networkd[1233]: lo: Link UP
Mar 7 01:32:22.635927 systemd-networkd[1233]: lo: Gained carrier
Mar 7 01:32:22.637863 systemd-networkd[1233]: Enumeration completed
Mar 7 01:32:22.638049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:32:22.638690 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:22.640105 systemd-networkd[1233]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:32:22.642040 systemd-networkd[1233]: eth0: Link UP
Mar 7 01:32:22.642313 systemd-networkd[1233]: eth0: Gained carrier
Mar 7 01:32:22.642370 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:22.646261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:32:22.655021 systemd-networkd[1233]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:32:22.666190 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:32:22.672334 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1245)
Mar 7 01:32:22.714163 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:32:22.737472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:32:22.744399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:32:22.745959 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:32:22.756374 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:32:22.767355 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:32:22.795152 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:32:22.796729 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:32:22.803339 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:32:22.903928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:32:22.912901 lvm[1277]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:32:22.952100 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:32:22.953851 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:32:22.955003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:32:22.955122 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:32:22.956446 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:32:22.958989 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:32:22.965470 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:32:22.970311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:32:22.971650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:32:22.975498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:32:22.979291 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:32:22.989092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:32:22.995605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:32:23.010497 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:32:23.012797 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:32:23.020362 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:32:23.025165 kernel: loop0: detected capacity change from 0 to 140768
Mar 7 01:32:23.054241 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:32:23.074157 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 01:32:23.107214 kernel: loop2: detected capacity change from 0 to 8
Mar 7 01:32:23.122160 kernel: loop3: detected capacity change from 0 to 228704
Mar 7 01:32:23.163164 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 01:32:23.183705 kernel: loop5: detected capacity change from 0 to 142488
Mar 7 01:32:23.204185 kernel: loop6: detected capacity change from 0 to 8
Mar 7 01:32:23.209193 kernel: loop7: detected capacity change from 0 to 228704
Mar 7 01:32:23.223210 (sd-merge)[1300]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 7 01:32:23.224654 (sd-merge)[1300]: Merged extensions into '/usr'.
Mar 7 01:32:23.230828 systemd[1]: Reloading requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:32:23.230954 systemd[1]: Reloading...
Mar 7 01:32:23.328170 zram_generator::config[1325]: No configuration found.
Mar 7 01:32:23.432199 systemd-networkd[1233]: eth0: DHCPv4 address 172.238.171.132/24, gateway 172.238.171.1 acquired from 23.194.118.60
Mar 7 01:32:23.437734 ldconfig[1283]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:32:23.480719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:32:23.544901 systemd[1]: Reloading finished in 312 ms.
Mar 7 01:32:23.565467 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:32:23.568938 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:32:23.580651 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:32:23.585519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:32:23.596253 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:32:23.596273 systemd[1]: Reloading...
Mar 7 01:32:23.621931 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:32:23.622352 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:32:23.623353 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:32:23.623637 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 7 01:32:23.623723 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 7 01:32:23.627746 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:32:23.627764 systemd-tmpfiles[1379]: Skipping /boot
Mar 7 01:32:23.648533 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:32:23.648549 systemd-tmpfiles[1379]: Skipping /boot
Mar 7 01:32:23.672349 zram_generator::config[1404]: No configuration found.
Mar 7 01:32:23.883842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:32:23.968454 systemd[1]: Reloading finished in 371 ms.
Mar 7 01:32:23.990892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:32:24.010286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:32:24.015284 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:32:24.020533 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:32:24.034799 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:32:24.042208 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:32:24.056512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:24.056746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:32:24.066399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:32:24.070049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:32:24.075666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:32:24.077359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:32:24.077469 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:24.093335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:32:24.097928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:32:24.105536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:32:24.106034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:32:24.116065 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:32:24.120370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:32:24.131740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:32:24.136517 augenrules[1488]: No rules
Mar 7 01:32:24.138325 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:32:24.152360 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:32:24.163259 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:32:24.166806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:24.167526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:32:24.176326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:32:24.188854 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:32:24.193318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:32:24.197967 systemd-resolved[1468]: Positive Trust Anchors:
Mar 7 01:32:24.197994 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:32:24.198025 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:32:24.206040 systemd-resolved[1468]: Defaulting to hostname 'linux'.
Mar 7 01:32:24.208318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:32:24.209750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:32:24.216806 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:32:24.225102 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:32:24.226107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:32:24.226962 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:32:24.228425 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:32:24.229655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:32:24.239663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:32:24.241407 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:32:24.241773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:32:24.242945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:32:24.243252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:32:24.244629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:32:24.244983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:32:24.252079 systemd[1]: Reached target network.target - Network.
Mar 7 01:32:24.253922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:32:24.254812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:32:24.254957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:32:24.254990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:32:24.257670 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:32:24.319762 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:32:24.321276 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:32:24.322307 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:32:24.323406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:32:24.324495 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:32:24.325667 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:32:24.325764 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:32:24.326732 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:32:24.328036 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:32:24.328962 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:32:24.329755 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:32:24.331528 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:32:24.335273 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:32:24.839963 systemd-resolved[1468]: Clock change detected. Flushing caches.
Mar 7 01:32:24.840080 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:32:24.840572 systemd-timesyncd[1509]: Contacted time server 23.150.41.122:123 (0.flatcar.pool.ntp.org).
Mar 7 01:32:24.840654 systemd-timesyncd[1509]: Initial clock synchronization to Sat 2026-03-07 01:32:24.839832 UTC.
Mar 7 01:32:24.845650 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:32:24.846616 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:32:24.847351 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:32:24.848344 systemd[1]: System is tainted: cgroupsv1
Mar 7 01:32:24.848394 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:32:24.848422 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:32:24.850013 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:32:24.860059 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:32:24.863047 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:32:24.882022 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:32:24.889069 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:32:24.889959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:32:24.893038 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:32:24.895162 jq[1528]: false
Mar 7 01:32:24.899680 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:32:24.920080 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:32:24.924064 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:32:24.941896 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:32:24.944103 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:32:24.951400 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:32:24.959034 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:32:24.967450 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:32:24.969265 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:32:24.969734 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:32:24.970595 dbus-daemon[1526]: [system] SELinux support is enabled
Mar 7 01:32:24.980170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:32:24.980524 dbus-daemon[1526]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1233 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:32:24.984023 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:32:24.990116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:32:24.990474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:32:25.001046 jq[1548]: true
Mar 7 01:32:25.025009 systemd-networkd[1233]: eth0: Gained IPv6LL
Mar 7 01:32:25.032244 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:32:25.032292 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:32:25.033771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:32:25.033790 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:32:25.035781 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:32:25.042213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:32:25.042420 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found loop4
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found loop5
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found loop6
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found loop7
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda1
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda2
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda3
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found usr
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda4
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda6
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda7
Mar 7 01:32:25.058620 extend-filesystems[1531]: Found sda9
Mar 7 01:32:25.058620 extend-filesystems[1531]: Checking size of /dev/sda9
Mar 7 01:32:25.051509 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:32:25.138164 tar[1553]: linux-amd64/LICENSE
Mar 7 01:32:25.138164 tar[1553]: linux-amd64/helm
Mar 7 01:32:25.195239 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 7 01:32:25.195337 update_engine[1543]: I20260307 01:32:25.064529 1543 main.cc:92] Flatcar Update Engine starting
Mar 7 01:32:25.195337 update_engine[1543]: I20260307 01:32:25.072758 1543 update_check_scheduler.cc:74] Next update check in 9m4s
Mar 7 01:32:25.195815 coreos-metadata[1525]: Mar 07 01:32:25.059 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:32:25.060320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:32:25.208538 extend-filesystems[1531]: Resized partition /dev/sda9
Mar 7 01:32:25.213087 jq[1556]: true
Mar 7 01:32:25.214758 coreos-metadata[1525]: Mar 07 01:32:25.197 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Mar 7 01:32:25.071533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:32:25.214849 extend-filesystems[1583]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:32:25.088371 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:32:25.092043 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:32:25.103011 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:32:25.112296 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:32:25.218309 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:32:25.267669 systemd-logind[1538]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:32:25.267706 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:32:25.271026 systemd-logind[1538]: New seat seat0.
Mar 7 01:32:25.305655 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:32:25.388935 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1230)
Mar 7 01:32:25.390931 coreos-metadata[1525]: Mar 07 01:32:25.390 INFO Fetch successful
Mar 7 01:32:25.390931 coreos-metadata[1525]: Mar 07 01:32:25.390 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Mar 7 01:32:25.395398 bash[1612]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:32:25.400422 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:32:25.422986 systemd[1]: Starting sshkeys.service...
Mar 7 01:32:25.473744 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:32:25.485460 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:32:25.515488 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:32:25.515828 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:32:25.517988 dbus-daemon[1526]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1574 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:32:25.530323 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:32:25.609955 polkitd[1620]: Started polkitd version 121 Mar 7 01:32:25.625338 polkitd[1620]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:32:25.627788 polkitd[1620]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:32:25.634572 containerd[1558]: time="2026-03-07T01:32:25.629320150Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:32:25.637636 polkitd[1620]: Finished loading, compiling and executing 2 rules Mar 7 01:32:25.639698 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:32:25.640330 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 01:32:25.642657 polkitd[1620]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:32:25.651939 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 7 01:32:25.656684 coreos-metadata[1525]: Mar 07 01:32:25.656 INFO Fetch successful Mar 7 01:32:25.685089 systemd-hostnamed[1574]: Hostname set to <172-238-171-132> (transient) Mar 7 01:32:25.694887 extend-filesystems[1583]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 01:32:25.694887 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 7 01:32:25.694887 extend-filesystems[1583]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.689317750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.692977531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693009161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693030171Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693271052Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693304762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693398502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693418332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693705972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693725182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:32:25.724598 containerd[1558]: time="2026-03-07T01:32:25.693741232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:32:25.685205 systemd-resolved[1468]: System hostname changed to '172-238-171-132'. Mar 7 01:32:25.725138 coreos-metadata[1617]: Mar 07 01:32:25.718 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 7 01:32:25.725458 extend-filesystems[1531]: Resized filesystem in /dev/sda9 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.693751502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.693847192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.694128042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.694320992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.696990743Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.697105943Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.697164023Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.707233599Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.707728719Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.707848679Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.707879889Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.707925329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:32:25.741407 containerd[1558]: time="2026-03-07T01:32:25.710235100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:32:25.701480 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710613220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710744860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710759960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710773650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710789070Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710809420Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710828360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710845860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710860000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710871970Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.710890130Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.716316763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.716365423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 7 01:32:25.742835 containerd[1558]: time="2026-03-07T01:32:25.716383203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.702988 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716397933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716419013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716437973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716474723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716498483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716519663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716539323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716560753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716581833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716607013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716630343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716654183Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716685933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716704373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744538 containerd[1558]: time="2026-03-07T01:32:25.716721933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716782263Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716802073Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716819613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716840023Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716857543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716877293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716892143Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:32:25.744886 containerd[1558]: time="2026-03-07T01:32:25.716939303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:32:25.746355 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.717264224Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true 
DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.717350714Z" level=info msg="Connect containerd service" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.717401294Z" level=info msg="using legacy CRI server" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.717413524Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.717549364Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.724486417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726225218Z" level=info msg="Start subscribing containerd event" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726315778Z" level=info msg="Start recovering state" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726395328Z" level=info msg="Start event monitor" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726420038Z" level=info msg="Start snapshots syncer" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726429428Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.726437528Z" level=info msg="Start streaming server" Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.735305273Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.735389553Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:32:25.751412 containerd[1558]: time="2026-03-07T01:32:25.744966547Z" level=info msg="containerd successfully booted in 0.117349s" Mar 7 01:32:25.789492 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:32:25.846797 coreos-metadata[1617]: Mar 07 01:32:25.845 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 7 01:32:25.938240 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 7 01:32:25.946207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:32:25.979292 coreos-metadata[1617]: Mar 07 01:32:25.979 INFO Fetch successful Mar 7 01:32:26.027222 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:32:26.034131 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:32:26.044377 systemd[1]: Finished sshkeys.service. Mar 7 01:32:26.228987 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:32:26.276735 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:32:26.292199 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:32:26.299101 tar[1553]: linux-amd64/README.md Mar 7 01:32:26.316350 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:32:26.316699 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:32:26.329443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:32:26.331512 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:32:26.345430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:32:26.352207 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:32:26.361455 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:32:26.365849 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:32:26.833108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:32:26.835573 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:32:26.837128 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:32:26.837981 systemd[1]: Startup finished in 9.873s (kernel) + 5.909s (userspace) = 15.783s. 
Mar 7 01:32:26.914603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:32:26.921372 systemd[1]: Started sshd@0-172.238.171.132:22-68.220.241.50:54260.service - OpenSSH per-connection server daemon (68.220.241.50:54260). Mar 7 01:32:27.078441 sshd[1715]: Accepted publickey for core from 68.220.241.50 port 54260 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:27.080650 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:27.092300 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:32:27.098274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:32:27.100105 systemd-logind[1538]: New session 1 of user core. Mar 7 01:32:27.123196 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:32:27.133175 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:32:27.143055 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:32:27.245915 systemd[1726]: Queued start job for default target default.target. Mar 7 01:32:27.246927 systemd[1726]: Created slice app.slice - User Application Slice. Mar 7 01:32:27.246951 systemd[1726]: Reached target paths.target - Paths. Mar 7 01:32:27.246965 systemd[1726]: Reached target timers.target - Timers. Mar 7 01:32:27.253998 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:32:27.277076 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:32:27.277152 systemd[1726]: Reached target sockets.target - Sockets. Mar 7 01:32:27.277167 systemd[1726]: Reached target basic.target - Basic System. Mar 7 01:32:27.277216 systemd[1726]: Reached target default.target - Main User Target. Mar 7 01:32:27.277255 systemd[1726]: Startup finished in 125ms. 
Mar 7 01:32:27.277865 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:32:27.283186 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:32:27.435339 systemd[1]: Started sshd@1-172.238.171.132:22-68.220.241.50:54272.service - OpenSSH per-connection server daemon (68.220.241.50:54272). Mar 7 01:32:27.472157 kubelet[1710]: E0307 01:32:27.472069 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:32:27.477237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:32:27.477576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:32:27.636392 sshd[1739]: Accepted publickey for core from 68.220.241.50 port 54272 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:27.639036 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:27.644715 systemd-logind[1538]: New session 2 of user core. Mar 7 01:32:27.651275 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:32:27.786524 sshd[1739]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:27.792238 systemd[1]: sshd@1-172.238.171.132:22-68.220.241.50:54272.service: Deactivated successfully. Mar 7 01:32:27.798554 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:32:27.799743 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:32:27.801307 systemd-logind[1538]: Removed session 2. Mar 7 01:32:27.820126 systemd[1]: Started sshd@2-172.238.171.132:22-68.220.241.50:54278.service - OpenSSH per-connection server daemon (68.220.241.50:54278). 
Mar 7 01:32:27.982593 sshd[1749]: Accepted publickey for core from 68.220.241.50 port 54278 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:27.983463 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:27.989506 systemd-logind[1538]: New session 3 of user core. Mar 7 01:32:27.999477 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:32:28.122340 sshd[1749]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:28.128250 systemd[1]: sshd@2-172.238.171.132:22-68.220.241.50:54278.service: Deactivated successfully. Mar 7 01:32:28.132780 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:32:28.133664 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:32:28.135003 systemd-logind[1538]: Removed session 3. Mar 7 01:32:28.147170 systemd[1]: Started sshd@3-172.238.171.132:22-68.220.241.50:54284.service - OpenSSH per-connection server daemon (68.220.241.50:54284). Mar 7 01:32:28.305958 sshd[1757]: Accepted publickey for core from 68.220.241.50 port 54284 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:28.308039 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:28.313343 systemd-logind[1538]: New session 4 of user core. Mar 7 01:32:28.320212 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:32:28.440822 sshd[1757]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:28.444437 systemd[1]: sshd@3-172.238.171.132:22-68.220.241.50:54284.service: Deactivated successfully. Mar 7 01:32:28.450340 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:32:28.451317 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:32:28.452454 systemd-logind[1538]: Removed session 4. 
Mar 7 01:32:28.472116 systemd[1]: Started sshd@4-172.238.171.132:22-68.220.241.50:54288.service - OpenSSH per-connection server daemon (68.220.241.50:54288). Mar 7 01:32:28.656942 sshd[1765]: Accepted publickey for core from 68.220.241.50 port 54288 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:28.658606 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:28.665590 systemd-logind[1538]: New session 5 of user core. Mar 7 01:32:28.672198 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:32:28.794480 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:32:28.794961 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:32:28.821986 sudo[1769]: pam_unix(sudo:session): session closed for user root Mar 7 01:32:28.849261 sshd[1765]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:28.858324 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:32:28.858710 systemd[1]: sshd@4-172.238.171.132:22-68.220.241.50:54288.service: Deactivated successfully. Mar 7 01:32:28.863649 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:32:28.864548 systemd-logind[1538]: Removed session 5. Mar 7 01:32:28.876228 systemd[1]: Started sshd@5-172.238.171.132:22-68.220.241.50:54298.service - OpenSSH per-connection server daemon (68.220.241.50:54298). Mar 7 01:32:29.034747 sshd[1774]: Accepted publickey for core from 68.220.241.50 port 54298 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:29.035557 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:29.042753 systemd-logind[1538]: New session 6 of user core. Mar 7 01:32:29.052352 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 01:32:29.156856 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:32:29.157495 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:32:29.163098 sudo[1779]: pam_unix(sudo:session): session closed for user root Mar 7 01:32:29.171514 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:32:29.171954 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:32:29.194371 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:32:29.198307 auditctl[1782]: No rules Mar 7 01:32:29.198827 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:32:29.199408 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:32:29.208613 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:32:29.245271 augenrules[1801]: No rules Mar 7 01:32:29.247743 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:32:29.250883 sudo[1778]: pam_unix(sudo:session): session closed for user root Mar 7 01:32:29.272770 sshd[1774]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:29.279671 systemd[1]: sshd@5-172.238.171.132:22-68.220.241.50:54298.service: Deactivated successfully. Mar 7 01:32:29.282644 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:32:29.283030 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:32:29.285244 systemd-logind[1538]: Removed session 6. Mar 7 01:32:29.312300 systemd[1]: Started sshd@6-172.238.171.132:22-68.220.241.50:54300.service - OpenSSH per-connection server daemon (68.220.241.50:54300). 
Mar 7 01:32:29.510016 sshd[1810]: Accepted publickey for core from 68.220.241.50 port 54300 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:32:29.512078 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:29.517168 systemd-logind[1538]: New session 7 of user core. Mar 7 01:32:29.523396 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:32:29.640196 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:32:29.640605 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:32:29.929355 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:32:29.934638 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:32:30.209629 dockerd[1829]: time="2026-03-07T01:32:30.208728868Z" level=info msg="Starting up" Mar 7 01:32:30.338661 dockerd[1829]: time="2026-03-07T01:32:30.338620943Z" level=info msg="Loading containers: start." Mar 7 01:32:30.439167 kernel: Initializing XFRM netlink socket Mar 7 01:32:30.523113 systemd-networkd[1233]: docker0: Link UP Mar 7 01:32:30.542457 dockerd[1829]: time="2026-03-07T01:32:30.542415534Z" level=info msg="Loading containers: done." 
Mar 7 01:32:30.557678 dockerd[1829]: time="2026-03-07T01:32:30.557629602Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:32:30.557838 dockerd[1829]: time="2026-03-07T01:32:30.557708572Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:32:30.557838 dockerd[1829]: time="2026-03-07T01:32:30.557810982Z" level=info msg="Daemon has completed initialization" Mar 7 01:32:30.589806 dockerd[1829]: time="2026-03-07T01:32:30.589754588Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:32:30.590239 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:32:31.078679 containerd[1558]: time="2026-03-07T01:32:31.078639972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:32:31.686640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314242332.mount: Deactivated successfully. 
Mar 7 01:32:32.711610 containerd[1558]: time="2026-03-07T01:32:32.711530108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:32.712764 containerd[1558]: time="2026-03-07T01:32:32.712559309Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116192"
Mar 7 01:32:32.713210 containerd[1558]: time="2026-03-07T01:32:32.713144629Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:32.716366 containerd[1558]: time="2026-03-07T01:32:32.716313291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:32.717582 containerd[1558]: time="2026-03-07T01:32:32.717372401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.638686169s"
Mar 7 01:32:32.717582 containerd[1558]: time="2026-03-07T01:32:32.717408491Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 7 01:32:32.718859 containerd[1558]: time="2026-03-07T01:32:32.718807002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:32:33.913173 containerd[1558]: time="2026-03-07T01:32:33.913074139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:33.914219 containerd[1558]: time="2026-03-07T01:32:33.914185429Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021816"
Mar 7 01:32:33.914714 containerd[1558]: time="2026-03-07T01:32:33.914643899Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:33.923531 containerd[1558]: time="2026-03-07T01:32:33.922605293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:33.923792 containerd[1558]: time="2026-03-07T01:32:33.923755134Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.204893482s"
Mar 7 01:32:33.923825 containerd[1558]: time="2026-03-07T01:32:33.923798804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 7 01:32:33.925268 containerd[1558]: time="2026-03-07T01:32:33.925221815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:32:34.850803 containerd[1558]: time="2026-03-07T01:32:34.850736977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:34.851662 containerd[1558]: time="2026-03-07T01:32:34.851625128Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162752"
Mar 7 01:32:34.851987 containerd[1558]: time="2026-03-07T01:32:34.851947138Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:34.854357 containerd[1558]: time="2026-03-07T01:32:34.854320939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:34.855402 containerd[1558]: time="2026-03-07T01:32:34.855238509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 929.872504ms"
Mar 7 01:32:34.855402 containerd[1558]: time="2026-03-07T01:32:34.855265349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 7 01:32:34.855812 containerd[1558]: time="2026-03-07T01:32:34.855772370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:32:35.840076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129634290.mount: Deactivated successfully.
Mar 7 01:32:36.192739 containerd[1558]: time="2026-03-07T01:32:36.192618308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:36.193983 containerd[1558]: time="2026-03-07T01:32:36.193942188Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828653"
Mar 7 01:32:36.195597 containerd[1558]: time="2026-03-07T01:32:36.194769809Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:36.196658 containerd[1558]: time="2026-03-07T01:32:36.196628310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:36.197456 containerd[1558]: time="2026-03-07T01:32:36.197419700Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.34161912s"
Mar 7 01:32:36.197513 containerd[1558]: time="2026-03-07T01:32:36.197457610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 7 01:32:36.198642 containerd[1558]: time="2026-03-07T01:32:36.198344470Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:32:36.721456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542459593.mount: Deactivated successfully.
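Each pull in the entries above ends with a "Pulled image ... size ... in ..." message of a fixed shape, so the image name, reported size, and pull duration can be recovered with a regular expression. A minimal sketch, not part of the log; the sample line below is abbreviated with `...` placeholders for the digests, but the image name, size, and duration are taken verbatim from the kube-proxy pull above:

```python
import re

# Matches containerd's "Pulled image" completion message: image reference,
# reported size in bytes, and pull duration (seconds or milliseconds).
PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*'
    r'size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+m?s)'
)

sample = (
    'msg="Pulled image \\"registry.k8s.io/kube-proxy:v1.33.9\\" with image id '
    '\\"sha256:36d2...\\", repo tag \\"registry.k8s.io/kube-proxy:v1.33.9\\", '
    'repo digest \\"registry.k8s.io/kube-proxy@sha256:079b...\\", '
    'size \\"31827666\\" in 1.34161912s"'
)

m = PULLED.search(sample)
```

The `m?s` alternative covers both the second-denominated durations (`1.34161912s`) and the millisecond ones (`929.872504ms`) seen in this log.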
Mar 7 01:32:37.424683 containerd[1558]: time="2026-03-07T01:32:37.424629063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.425867 containerd[1558]: time="2026-03-07T01:32:37.425837704Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Mar 7 01:32:37.426555 containerd[1558]: time="2026-03-07T01:32:37.426512934Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.429107 containerd[1558]: time="2026-03-07T01:32:37.429070535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.430137 containerd[1558]: time="2026-03-07T01:32:37.430114656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.231745426s"
Mar 7 01:32:37.430280 containerd[1558]: time="2026-03-07T01:32:37.430187676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 7 01:32:37.431119 containerd[1558]: time="2026-03-07T01:32:37.431079456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:32:37.526653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:32:37.536574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:32:37.711333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:32:37.725279 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:32:37.779118 kubelet[2103]: E0307 01:32:37.779066 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:32:37.785674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:32:37.786776 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:32:37.967436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820389461.mount: Deactivated successfully.
Mar 7 01:32:37.970781 containerd[1558]: time="2026-03-07T01:32:37.969829406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.970781 containerd[1558]: time="2026-03-07T01:32:37.970750356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Mar 7 01:32:37.971389 containerd[1558]: time="2026-03-07T01:32:37.971162726Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.978947 containerd[1558]: time="2026-03-07T01:32:37.978901600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:37.980025 containerd[1558]: time="2026-03-07T01:32:37.979989151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.875565ms"
Mar 7 01:32:37.980086 containerd[1558]: time="2026-03-07T01:32:37.980031021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 7 01:32:37.980878 containerd[1558]: time="2026-03-07T01:32:37.980860841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:32:38.468545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774373949.mount: Deactivated successfully.
Mar 7 01:32:39.335522 containerd[1558]: time="2026-03-07T01:32:39.335477918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:39.336456 containerd[1558]: time="2026-03-07T01:32:39.336426518Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718846"
Mar 7 01:32:39.337005 containerd[1558]: time="2026-03-07T01:32:39.336967269Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:39.340925 containerd[1558]: time="2026-03-07T01:32:39.339431830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:39.341257 containerd[1558]: time="2026-03-07T01:32:39.341233331Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.3602592s"
Mar 7 01:32:39.341320 containerd[1558]: time="2026-03-07T01:32:39.341258041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 7 01:32:44.051097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:32:44.059185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:32:44.091112 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-7.scope)...
Mar 7 01:32:44.091129 systemd[1]: Reloading...
Mar 7 01:32:44.245953 zram_generator::config[2249]: No configuration found.
Mar 7 01:32:44.392544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:32:44.467871 systemd[1]: Reloading finished in 376 ms.
Mar 7 01:32:44.523200 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:32:44.523312 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:32:44.523736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:32:44.532417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:32:44.685085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:32:44.685544 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:32:44.720144 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:32:44.720144 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:32:44.720144 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:32:44.720822 kubelet[2317]: I0307 01:32:44.720774 2317 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:32:45.144124 kubelet[2317]: I0307 01:32:45.144078 2317 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:32:45.144124 kubelet[2317]: I0307 01:32:45.144108 2317 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:32:45.144516 kubelet[2317]: I0307 01:32:45.144505 2317 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:32:45.172930 kubelet[2317]: E0307 01:32:45.172360 2317 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.171.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:32:45.177336 kubelet[2317]: I0307 01:32:45.176454 2317 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:32:45.186108 kubelet[2317]: E0307 01:32:45.186068 2317 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:32:45.186108 kubelet[2317]: I0307 01:32:45.186097 2317 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:32:45.189614 kubelet[2317]: I0307 01:32:45.189594 2317 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:32:45.190787 kubelet[2317]: I0307 01:32:45.190745 2317 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:32:45.190933 kubelet[2317]: I0307 01:32:45.190771 2317 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-171-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 01:32:45.190933 kubelet[2317]: I0307 01:32:45.190925 2317 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:32:45.191056 kubelet[2317]: I0307 01:32:45.190937 2317 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:32:45.191076 kubelet[2317]: I0307 01:32:45.191056 2317 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:32:45.195606 kubelet[2317]: I0307 01:32:45.195581 2317 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:32:45.195606 kubelet[2317]: I0307 01:32:45.195601 2317 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:32:45.195942 kubelet[2317]: I0307 01:32:45.195627 2317 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:32:45.197516 kubelet[2317]: I0307 01:32:45.197308 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:32:45.201779 kubelet[2317]: E0307 01:32:45.201080 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.171.132:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-171-132&limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:32:45.201779 kubelet[2317]: E0307 01:32:45.201482 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.171.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:32:45.201999 kubelet[2317]: I0307 01:32:45.201969 2317 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:32:45.202709 kubelet[2317]: I0307 01:32:45.202555 2317 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:32:45.204010 kubelet[2317]: W0307 01:32:45.203580 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:32:45.208993 kubelet[2317]: I0307 01:32:45.208969 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:32:45.209034 kubelet[2317]: I0307 01:32:45.209025 2317 server.go:1289] "Started kubelet"
Mar 7 01:32:45.209458 kubelet[2317]: I0307 01:32:45.209256 2317 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:32:45.210297 kubelet[2317]: I0307 01:32:45.210233 2317 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:32:45.216677 kubelet[2317]: I0307 01:32:45.216623 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:32:45.218473 kubelet[2317]: I0307 01:32:45.217089 2317 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:32:45.219437 kubelet[2317]: I0307 01:32:45.219405 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:32:45.221025 kubelet[2317]: E0307 01:32:45.218734 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.171.132:6443/api/v1/namespaces/default/events\": dial tcp 172.238.171.132:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-171-132.189a6b1a93570e1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-171-132,UID:172-238-171-132,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-171-132,},FirstTimestamp:2026-03-07 01:32:45.208989213 +0000 UTC m=+0.519650551,LastTimestamp:2026-03-07 01:32:45.208989213 +0000 UTC m=+0.519650551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-171-132,}"
Mar 7 01:32:45.222540 kubelet[2317]: I0307 01:32:45.222517 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:32:45.223543 kubelet[2317]: I0307 01:32:45.223524 2317 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:32:45.225869 kubelet[2317]: I0307 01:32:45.223651 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:32:45.225950 kubelet[2317]: E0307 01:32:45.224676 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found"
Mar 7 01:32:45.226040 kubelet[2317]: I0307 01:32:45.226028 2317 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:32:45.226411 kubelet[2317]: E0307 01:32:45.226391 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-132?timeout=10s\": dial tcp 172.238.171.132:6443: connect: connection refused" interval="200ms"
Mar 7 01:32:45.229247 kubelet[2317]: E0307 01:32:45.229226 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.171.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:32:45.229624 kubelet[2317]: E0307 01:32:45.229610 2317 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:32:45.229895 kubelet[2317]: I0307 01:32:45.229882 2317 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:32:45.230011 kubelet[2317]: I0307 01:32:45.230000 2317 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:32:45.230139 kubelet[2317]: I0307 01:32:45.230007 2317 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:32:45.230395 kubelet[2317]: I0307 01:32:45.230119 2317 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:32:45.261197 kubelet[2317]: I0307 01:32:45.261119 2317 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:32:45.261197 kubelet[2317]: I0307 01:32:45.261142 2317 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:32:45.261511 kubelet[2317]: I0307 01:32:45.261382 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
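The lease-controller errors in this log back off by doubling: the retry interval above is reported as 200ms, then 400ms and 800ms on the subsequent failures later in the log. A minimal sketch of that doubling pattern; the cap and retry count here are illustrative assumptions, not values taken from the log:

```python
def backoff_intervals(initial=0.2, factor=2.0, cap=7.0, retries=5):
    """Return successive retry intervals in seconds, doubling up to a cap.

    initial/factor mirror the 200ms -> 400ms -> 800ms progression seen in
    the log; cap and retries are assumed values for illustration only.
    """
    intervals, current = [], initial
    for _ in range(retries):
        intervals.append(min(current, cap))
        current *= factor
    return intervals
```

With the defaults, the first three intervals reproduce the progression reported by the lease controller above.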
Mar 7 01:32:45.261511 kubelet[2317]: I0307 01:32:45.261403 2317 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:32:45.261667 kubelet[2317]: E0307 01:32:45.261478 2317 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:32:45.264629 kubelet[2317]: E0307 01:32:45.264608 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.171.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:32:45.266679 kubelet[2317]: I0307 01:32:45.266558 2317 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:32:45.266679 kubelet[2317]: I0307 01:32:45.266570 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:32:45.266679 kubelet[2317]: I0307 01:32:45.266585 2317 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:32:45.268404 kubelet[2317]: I0307 01:32:45.268384 2317 policy_none.go:49] "None policy: Start"
Mar 7 01:32:45.268452 kubelet[2317]: I0307 01:32:45.268409 2317 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:32:45.268452 kubelet[2317]: I0307 01:32:45.268425 2317 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:32:45.276727 kubelet[2317]: E0307 01:32:45.276130 2317 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:32:45.276727 kubelet[2317]: I0307 01:32:45.276282 2317 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:32:45.276727 kubelet[2317]: I0307 01:32:45.276291 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:32:45.277538 kubelet[2317]: I0307 01:32:45.277523 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:32:45.281731 kubelet[2317]: E0307 01:32:45.281508 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:32:45.281731 kubelet[2317]: E0307 01:32:45.281546 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-171-132\" not found"
Mar 7 01:32:45.368181 kubelet[2317]: E0307 01:32:45.368152 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132"
Mar 7 01:32:45.374931 kubelet[2317]: E0307 01:32:45.373843 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132"
Mar 7 01:32:45.376559 kubelet[2317]: E0307 01:32:45.376540 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132"
Mar 7 01:32:45.381131 kubelet[2317]: I0307 01:32:45.381090 2317 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-132"
Mar 7 01:32:45.381606 kubelet[2317]: E0307 01:32:45.381571 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.132:6443/api/v1/nodes\": dial tcp 172.238.171.132:6443: connect: connection refused" node="172-238-171-132"
Mar 7 01:32:45.427261 kubelet[2317]: I0307 01:32:45.427130 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-k8s-certs\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:45.427261 kubelet[2317]: I0307 01:32:45.427165 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-ca-certs\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:45.427261 kubelet[2317]: I0307 01:32:45.427183 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-k8s-certs\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:45.427261 kubelet[2317]: I0307 01:32:45.427197 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:45.427261 kubelet[2317]: I0307 01:32:45.427216 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-ca-certs\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:45.427506 kubelet[2317]: I0307 01:32:45.427231 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:45.427506 kubelet[2317]: I0307 01:32:45.427245 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-flexvolume-dir\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:45.427506 kubelet[2317]: I0307 01:32:45.427258 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-kubeconfig\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:45.427506 kubelet[2317]: I0307 01:32:45.427271 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/813c7806336547103107a4975f54fb0f-kubeconfig\") pod \"kube-scheduler-172-238-171-132\" (UID: \"813c7806336547103107a4975f54fb0f\") " pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:45.429595 kubelet[2317]: E0307 01:32:45.428605 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-132?timeout=10s\": dial tcp 172.238.171.132:6443: connect: connection refused" interval="400ms"
Mar 7 01:32:45.584049 kubelet[2317]: I0307 01:32:45.584012 2317 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-132"
Mar 7 01:32:45.584418 kubelet[2317]: E0307 01:32:45.584391 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.132:6443/api/v1/nodes\": dial tcp 172.238.171.132:6443: connect: connection refused" node="172-238-171-132"
Mar 7 01:32:45.669355 kubelet[2317]: E0307 01:32:45.669314 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:45.670116 containerd[1558]: time="2026-03-07T01:32:45.670077843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-171-132,Uid:782a523cbb0f7eb9f3b36916df02eaf6,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:45.675028 kubelet[2317]: E0307 01:32:45.675006 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:45.675701 containerd[1558]: time="2026-03-07T01:32:45.675455216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-171-132,Uid:a5e6318ec347d66b2d7def3aaa65cb5b,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:45.677743 kubelet[2317]: E0307 01:32:45.677643 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:45.677933 containerd[1558]: time="2026-03-07T01:32:45.677887527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-171-132,Uid:813c7806336547103107a4975f54fb0f,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:45.830005 kubelet[2317]: E0307 01:32:45.829966 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-132?timeout=10s\": dial tcp 172.238.171.132:6443: connect: connection refused" interval="800ms"
Mar 7 01:32:45.985890 kubelet[2317]: I0307 01:32:45.985773 2317 kubelet_node_status.go:75] "Attempting to
register node" node="172-238-171-132" Mar 7 01:32:45.986201 kubelet[2317]: E0307 01:32:45.986097 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.132:6443/api/v1/nodes\": dial tcp 172.238.171.132:6443: connect: connection refused" node="172-238-171-132" Mar 7 01:32:46.145875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588803831.mount: Deactivated successfully. Mar 7 01:32:46.152760 containerd[1558]: time="2026-03-07T01:32:46.152690514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:32:46.154034 containerd[1558]: time="2026-03-07T01:32:46.153928485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:32:46.155929 containerd[1558]: time="2026-03-07T01:32:46.154768895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:32:46.155929 containerd[1558]: time="2026-03-07T01:32:46.155711706Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:32:46.156516 containerd[1558]: time="2026-03-07T01:32:46.156417036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:32:46.157211 containerd[1558]: time="2026-03-07T01:32:46.157108216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Mar 7 01:32:46.157211 containerd[1558]: time="2026-03-07T01:32:46.157164486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:32:46.160816 containerd[1558]: time="2026-03-07T01:32:46.160492638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:32:46.162774 containerd[1558]: time="2026-03-07T01:32:46.162437539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.483352ms" Mar 7 01:32:46.164929 containerd[1558]: time="2026-03-07T01:32:46.164878820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.525357ms" Mar 7 01:32:46.171342 containerd[1558]: time="2026-03-07T01:32:46.171290383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.760337ms" Mar 7 01:32:46.258087 kubelet[2317]: E0307 01:32:46.257873 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.171.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:32:46.268494 kubelet[2317]: E0307 01:32:46.268160 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.171.132:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-171-132&limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:32:46.291761 containerd[1558]: time="2026-03-07T01:32:46.290892633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:32:46.291761 containerd[1558]: time="2026-03-07T01:32:46.291640124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:32:46.291761 containerd[1558]: time="2026-03-07T01:32:46.291652344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.292056 containerd[1558]: time="2026-03-07T01:32:46.291742424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.297700 containerd[1558]: time="2026-03-07T01:32:46.297502827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:32:46.297700 containerd[1558]: time="2026-03-07T01:32:46.297577597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:32:46.297700 containerd[1558]: time="2026-03-07T01:32:46.297594797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.299835 containerd[1558]: time="2026-03-07T01:32:46.299673778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.302863 containerd[1558]: time="2026-03-07T01:32:46.302782589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:32:46.303327 containerd[1558]: time="2026-03-07T01:32:46.302972809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:32:46.303327 containerd[1558]: time="2026-03-07T01:32:46.303037459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.303898 containerd[1558]: time="2026-03-07T01:32:46.303861900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:32:46.366854 kubelet[2317]: E0307 01:32:46.365774 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.171.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:32:46.396967 containerd[1558]: time="2026-03-07T01:32:46.395734546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-171-132,Uid:782a523cbb0f7eb9f3b36916df02eaf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c33108284f79020aa9530cd33d33eb8465393a4966a393cf3b750e0043d1b0b\"" Mar 7 01:32:46.397747 kubelet[2317]: E0307 01:32:46.397714 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:46.405682 containerd[1558]: time="2026-03-07T01:32:46.405638461Z" level=info msg="CreateContainer within sandbox \"6c33108284f79020aa9530cd33d33eb8465393a4966a393cf3b750e0043d1b0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:32:46.409584 containerd[1558]: time="2026-03-07T01:32:46.409562733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-171-132,Uid:813c7806336547103107a4975f54fb0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f43e2d967f15d9395d33b5b0d74d60f259c7d7e0f0319dc63cf839e7dc6468a\"" Mar 7 01:32:46.412119 kubelet[2317]: E0307 01:32:46.412077 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:46.417359 containerd[1558]: time="2026-03-07T01:32:46.417336936Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-238-171-132,Uid:a5e6318ec347d66b2d7def3aaa65cb5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"20bf009754eb701dbf4d4d0a7cba7caf90ea313f0ea396c6c2c496cfeb0e74a6\"" Mar 7 01:32:46.420157 kubelet[2317]: E0307 01:32:46.420131 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:46.420329 containerd[1558]: time="2026-03-07T01:32:46.420304418Z" level=info msg="CreateContainer within sandbox \"7f43e2d967f15d9395d33b5b0d74d60f259c7d7e0f0319dc63cf839e7dc6468a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:32:46.423259 containerd[1558]: time="2026-03-07T01:32:46.423237499Z" level=info msg="CreateContainer within sandbox \"20bf009754eb701dbf4d4d0a7cba7caf90ea313f0ea396c6c2c496cfeb0e74a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:32:46.430875 containerd[1558]: time="2026-03-07T01:32:46.430841283Z" level=info msg="CreateContainer within sandbox \"6c33108284f79020aa9530cd33d33eb8465393a4966a393cf3b750e0043d1b0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31a350641feb11de389c6eb6b708be07e5312dd8221aa0da16bbf5d1453f72d4\"" Mar 7 01:32:46.432144 containerd[1558]: time="2026-03-07T01:32:46.432118024Z" level=info msg="StartContainer for \"31a350641feb11de389c6eb6b708be07e5312dd8221aa0da16bbf5d1453f72d4\"" Mar 7 01:32:46.436611 containerd[1558]: time="2026-03-07T01:32:46.436588676Z" level=info msg="CreateContainer within sandbox \"20bf009754eb701dbf4d4d0a7cba7caf90ea313f0ea396c6c2c496cfeb0e74a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfb1ba611bdb5ac86a118d2609653e4f6da83bbd1656bc4b267afbe2a734f362\"" Mar 7 01:32:46.437143 containerd[1558]: time="2026-03-07T01:32:46.437113586Z" level=info msg="StartContainer for 
\"dfb1ba611bdb5ac86a118d2609653e4f6da83bbd1656bc4b267afbe2a734f362\"" Mar 7 01:32:46.438823 containerd[1558]: time="2026-03-07T01:32:46.438802777Z" level=info msg="CreateContainer within sandbox \"7f43e2d967f15d9395d33b5b0d74d60f259c7d7e0f0319dc63cf839e7dc6468a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10ca5f6b9cc11f3c9b75db0b264e4223e0c93e7ce001cf549df4a6d67eb5e6fc\"" Mar 7 01:32:46.439275 containerd[1558]: time="2026-03-07T01:32:46.439237927Z" level=info msg="StartContainer for \"10ca5f6b9cc11f3c9b75db0b264e4223e0c93e7ce001cf549df4a6d67eb5e6fc\"" Mar 7 01:32:46.553041 kubelet[2317]: E0307 01:32:46.552844 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.171.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.171.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:32:46.555837 containerd[1558]: time="2026-03-07T01:32:46.555771656Z" level=info msg="StartContainer for \"dfb1ba611bdb5ac86a118d2609653e4f6da83bbd1656bc4b267afbe2a734f362\" returns successfully" Mar 7 01:32:46.587951 containerd[1558]: time="2026-03-07T01:32:46.586150981Z" level=info msg="StartContainer for \"31a350641feb11de389c6eb6b708be07e5312dd8221aa0da16bbf5d1453f72d4\" returns successfully" Mar 7 01:32:46.593956 containerd[1558]: time="2026-03-07T01:32:46.593394934Z" level=info msg="StartContainer for \"10ca5f6b9cc11f3c9b75db0b264e4223e0c93e7ce001cf549df4a6d67eb5e6fc\" returns successfully" Mar 7 01:32:46.630897 kubelet[2317]: E0307 01:32:46.630858 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-132?timeout=10s\": dial tcp 172.238.171.132:6443: connect: connection refused" interval="1.6s" Mar 7 01:32:46.789632 kubelet[2317]: I0307 
01:32:46.788971 2317 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-132" Mar 7 01:32:47.288634 kubelet[2317]: E0307 01:32:47.288590 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:47.289076 kubelet[2317]: E0307 01:32:47.288775 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:47.291919 kubelet[2317]: E0307 01:32:47.290547 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:47.291919 kubelet[2317]: E0307 01:32:47.290693 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:47.296232 kubelet[2317]: E0307 01:32:47.296208 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:47.296368 kubelet[2317]: E0307 01:32:47.296346 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:48.127962 kubelet[2317]: I0307 01:32:48.126049 2317 kubelet_node_status.go:78] "Successfully registered node" node="172-238-171-132" Mar 7 01:32:48.127962 kubelet[2317]: E0307 01:32:48.126078 2317 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-238-171-132\": node \"172-238-171-132\" not found" Mar 7 01:32:48.187172 kubelet[2317]: E0307 01:32:48.187089 2317 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.288042 kubelet[2317]: E0307 01:32:48.287983 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.296919 kubelet[2317]: E0307 01:32:48.296882 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:48.297272 kubelet[2317]: E0307 01:32:48.297014 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:48.297272 kubelet[2317]: E0307 01:32:48.297236 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:48.297338 kubelet[2317]: E0307 01:32:48.297309 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:48.297544 kubelet[2317]: E0307 01:32:48.297518 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-132\" not found" node="172-238-171-132" Mar 7 01:32:48.297621 kubelet[2317]: E0307 01:32:48.297607 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:48.388714 kubelet[2317]: E0307 01:32:48.388609 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.489235 kubelet[2317]: E0307 01:32:48.489183 2317 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"172-238-171-132\" not found" Mar 7 01:32:48.590150 kubelet[2317]: E0307 01:32:48.590109 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.691001 kubelet[2317]: E0307 01:32:48.690872 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.791583 kubelet[2317]: E0307 01:32:48.791536 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.892511 kubelet[2317]: E0307 01:32:48.892440 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:48.993301 kubelet[2317]: E0307 01:32:48.993132 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-132\" not found" Mar 7 01:32:49.025558 kubelet[2317]: I0307 01:32:49.025512 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-132" Mar 7 01:32:49.056988 kubelet[2317]: I0307 01:32:49.056892 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-171-132" Mar 7 01:32:49.069682 kubelet[2317]: I0307 01:32:49.069472 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-171-132" Mar 7 01:32:49.203631 kubelet[2317]: I0307 01:32:49.203476 2317 apiserver.go:52] "Watching apiserver" Mar 7 01:32:49.226822 kubelet[2317]: I0307 01:32:49.226786 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:32:49.300579 kubelet[2317]: I0307 01:32:49.300530 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-132" Mar 7 01:32:49.304953 kubelet[2317]: E0307 01:32:49.302938 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:49.305919 kubelet[2317]: E0307 01:32:49.305721 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:49.311731 kubelet[2317]: E0307 01:32:49.311606 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-132\" already exists" pod="kube-system/kube-scheduler-172-238-171-132" Mar 7 01:32:49.312098 kubelet[2317]: E0307 01:32:49.311999 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:50.170101 systemd[1]: Reloading requested from client PID 2608 ('systemctl') (unit session-7.scope)... Mar 7 01:32:50.170424 systemd[1]: Reloading... Mar 7 01:32:50.252934 zram_generator::config[2644]: No configuration found. Mar 7 01:32:50.301333 kubelet[2317]: E0307 01:32:50.301303 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:32:50.385316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:32:50.460104 systemd[1]: Reloading finished in 289 ms. Mar 7 01:32:50.496431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:32:50.515329 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:32:50.515723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:32:50.524339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:32:50.689254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:32:50.697583 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:32:50.740735 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:32:50.740735 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:32:50.740735 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:32:50.740735 kubelet[2709]: I0307 01:32:50.740536 2709 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:32:50.751573 kubelet[2709]: I0307 01:32:50.751539 2709 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:32:50.751573 kubelet[2709]: I0307 01:32:50.751564 2709 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:32:50.751804 kubelet[2709]: I0307 01:32:50.751784 2709 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:32:50.753498 kubelet[2709]: I0307 01:32:50.753466 2709 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:32:50.757098 kubelet[2709]: I0307 01:32:50.757073 2709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:32:50.761354 kubelet[2709]: E0307 01:32:50.761308 2709 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:32:50.761354 kubelet[2709]: I0307 01:32:50.761343 2709 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:32:50.767775 kubelet[2709]: I0307 01:32:50.767659 2709 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:32:50.768328 kubelet[2709]: I0307 01:32:50.768287 2709 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:32:50.768719 kubelet[2709]: I0307 01:32:50.768329 2709 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-171-132","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:32:50.768806 kubelet[2709]: I0307 01:32:50.768751 2709 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
01:32:50.768806 kubelet[2709]: I0307 01:32:50.768762 2709 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:32:50.768845 kubelet[2709]: I0307 01:32:50.768810 2709 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:32:50.769046 kubelet[2709]: I0307 01:32:50.769013 2709 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:32:50.769046 kubelet[2709]: I0307 01:32:50.769034 2709 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:32:50.771629 kubelet[2709]: I0307 01:32:50.769992 2709 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:32:50.771629 kubelet[2709]: I0307 01:32:50.770018 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:32:50.775965 kubelet[2709]: I0307 01:32:50.775948 2709 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:32:50.776702 kubelet[2709]: I0307 01:32:50.776689 2709 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:32:50.779966 kubelet[2709]: I0307 01:32:50.779943 2709 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:32:50.780072 kubelet[2709]: I0307 01:32:50.780062 2709 server.go:1289] "Started kubelet" Mar 7 01:32:50.783743 kubelet[2709]: I0307 01:32:50.783729 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:32:50.788773 kubelet[2709]: I0307 01:32:50.788647 2709 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:32:50.789626 kubelet[2709]: I0307 01:32:50.789604 2709 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:32:50.792586 kubelet[2709]: I0307 01:32:50.792566 2709 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:32:50.793828 kubelet[2709]: I0307 01:32:50.793714 2709 desired_state_of_world_populator.go:150] "Desired state 
populator starts to run" Mar 7 01:32:50.794423 kubelet[2709]: I0307 01:32:50.794372 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:32:50.794489 kubelet[2709]: I0307 01:32:50.794402 2709 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:32:50.794633 kubelet[2709]: I0307 01:32:50.794611 2709 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:32:50.794877 kubelet[2709]: I0307 01:32:50.794836 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:32:50.795319 kubelet[2709]: E0307 01:32:50.795290 2709 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:32:50.802283 kubelet[2709]: I0307 01:32:50.802248 2709 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:32:50.804032 kubelet[2709]: I0307 01:32:50.804019 2709 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:32:50.804093 kubelet[2709]: I0307 01:32:50.804084 2709 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:32:50.804173 kubelet[2709]: I0307 01:32:50.804163 2709 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:32:50.804311 kubelet[2709]: I0307 01:32:50.804300 2709 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:32:50.804421 kubelet[2709]: E0307 01:32:50.804405 2709 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:32:50.810445 kubelet[2709]: I0307 01:32:50.810432 2709 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:32:50.810543 kubelet[2709]: I0307 01:32:50.810533 2709 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:32:50.810713 kubelet[2709]: I0307 01:32:50.810697 2709 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:32:50.882135 kubelet[2709]: I0307 01:32:50.882114 2709 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882237 2709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882256 2709 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882367 2709 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882376 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882391 2709 policy_none.go:49] "None policy: Start"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882401 2709 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882410 2709 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:32:50.882931 kubelet[2709]: I0307 01:32:50.882487 2709 state_mem.go:75] "Updated machine memory state"
Mar 7 01:32:50.884256 kubelet[2709]: E0307 01:32:50.884243 2709 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:32:50.885212 kubelet[2709]: I0307 01:32:50.885188 2709 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:32:50.885244 kubelet[2709]: I0307 01:32:50.885214 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:32:50.887413 kubelet[2709]: I0307 01:32:50.887400 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:32:50.889196 kubelet[2709]: E0307 01:32:50.889181 2709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:32:50.906924 kubelet[2709]: I0307 01:32:50.905375 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:50.906924 kubelet[2709]: I0307 01:32:50.905863 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:50.907516 kubelet[2709]: I0307 01:32:50.907503 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.911968 kubelet[2709]: E0307 01:32:50.911949 2709 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-132\" already exists" pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:50.914419 kubelet[2709]: E0307 01:32:50.914211 2709 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-171-132\" already exists" pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.915399 kubelet[2709]: E0307 01:32:50.915386 2709 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-171-132\" already exists" pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:50.991189 kubelet[2709]: I0307 01:32:50.991005 2709 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-132"
Mar 7 01:32:50.996276 kubelet[2709]: I0307 01:32:50.996248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/813c7806336547103107a4975f54fb0f-kubeconfig\") pod \"kube-scheduler-172-238-171-132\" (UID: \"813c7806336547103107a4975f54fb0f\") " pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:50.996276 kubelet[2709]: I0307 01:32:50.996275 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-ca-certs\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:50.996377 kubelet[2709]: I0307 01:32:50.996303 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-k8s-certs\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:50.996377 kubelet[2709]: I0307 01:32:50.996318 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-ca-certs\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.996377 kubelet[2709]: I0307 01:32:50.996333 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-flexvolume-dir\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.996377 kubelet[2709]: I0307 01:32:50.996349 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/782a523cbb0f7eb9f3b36916df02eaf6-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-171-132\" (UID: \"782a523cbb0f7eb9f3b36916df02eaf6\") " pod="kube-system/kube-apiserver-172-238-171-132"
Mar 7 01:32:50.996377 kubelet[2709]: I0307 01:32:50.996362 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-k8s-certs\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.996517 kubelet[2709]: I0307 01:32:50.996374 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-kubeconfig\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:50.996517 kubelet[2709]: I0307 01:32:50.996390 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5e6318ec347d66b2d7def3aaa65cb5b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-171-132\" (UID: \"a5e6318ec347d66b2d7def3aaa65cb5b\") " pod="kube-system/kube-controller-manager-172-238-171-132"
Mar 7 01:32:51.004025 kubelet[2709]: I0307 01:32:51.003844 2709 kubelet_node_status.go:124] "Node was previously registered" node="172-238-171-132"
Mar 7 01:32:51.004025 kubelet[2709]: I0307 01:32:51.003940 2709 kubelet_node_status.go:78] "Successfully registered node" node="172-238-171-132"
Mar 7 01:32:51.212674 kubelet[2709]: E0307 01:32:51.212634 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.214827 kubelet[2709]: E0307 01:32:51.214750 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.215924 kubelet[2709]: E0307 01:32:51.215869 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.772722 kubelet[2709]: I0307 01:32:51.771802 2709 apiserver.go:52] "Watching apiserver"
Mar 7 01:32:51.794987 kubelet[2709]: I0307 01:32:51.794938 2709 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 01:32:51.847901 kubelet[2709]: I0307 01:32:51.847872 2709 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:51.848357 kubelet[2709]: E0307 01:32:51.848342 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.848881 kubelet[2709]: E0307 01:32:51.848866 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.859221 kubelet[2709]: E0307 01:32:51.859194 2709 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-132\" already exists" pod="kube-system/kube-scheduler-172-238-171-132"
Mar 7 01:32:51.859333 kubelet[2709]: E0307 01:32:51.859320 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:51.882943 kubelet[2709]: I0307 01:32:51.882045 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-171-132" podStartSLOduration=2.882031827 podStartE2EDuration="2.882031827s" podCreationTimestamp="2026-03-07 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:51.881953657 +0000 UTC m=+1.179266650" watchObservedRunningTime="2026-03-07 01:32:51.882031827 +0000 UTC m=+1.179344820"
Mar 7 01:32:51.897844 kubelet[2709]: I0307 01:32:51.897797 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-171-132" podStartSLOduration=2.897785755 podStartE2EDuration="2.897785755s" podCreationTimestamp="2026-03-07 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:51.890525641 +0000 UTC m=+1.187838654" watchObservedRunningTime="2026-03-07 01:32:51.897785755 +0000 UTC m=+1.195098748"
Mar 7 01:32:52.849700 kubelet[2709]: E0307 01:32:52.849670 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:52.850632 kubelet[2709]: E0307 01:32:52.850134 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:54.392549 kubelet[2709]: E0307 01:32:54.392501 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:55.648362 kubelet[2709]: I0307 01:32:55.648319 2709 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:32:55.654351 containerd[1558]: time="2026-03-07T01:32:55.651179770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:32:55.654779 kubelet[2709]: I0307 01:32:55.652000 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:32:55.718619 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:32:56.194379 kubelet[2709]: E0307 01:32:56.194351 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:56.207130 kubelet[2709]: I0307 01:32:56.206651 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-171-132" podStartSLOduration=7.206453448 podStartE2EDuration="7.206453448s" podCreationTimestamp="2026-03-07 01:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:51.898544955 +0000 UTC m=+1.195857948" watchObservedRunningTime="2026-03-07 01:32:56.206453448 +0000 UTC m=+5.503766441"
Mar 7 01:32:56.634432 kubelet[2709]: I0307 01:32:56.634394 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e20126da-7b15-41d2-bddd-14b567ba2820-lib-modules\") pod \"kube-proxy-59b65\" (UID: \"e20126da-7b15-41d2-bddd-14b567ba2820\") " pod="kube-system/kube-proxy-59b65"
Mar 7 01:32:56.634432 kubelet[2709]: I0307 01:32:56.634435 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j7s7\" (UniqueName: \"kubernetes.io/projected/e20126da-7b15-41d2-bddd-14b567ba2820-kube-api-access-2j7s7\") pod \"kube-proxy-59b65\" (UID: \"e20126da-7b15-41d2-bddd-14b567ba2820\") " pod="kube-system/kube-proxy-59b65"
Mar 7 01:32:56.634432 kubelet[2709]: I0307 01:32:56.634455 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e20126da-7b15-41d2-bddd-14b567ba2820-kube-proxy\") pod \"kube-proxy-59b65\" (UID: \"e20126da-7b15-41d2-bddd-14b567ba2820\") " pod="kube-system/kube-proxy-59b65"
Mar 7 01:32:56.634432 kubelet[2709]: I0307 01:32:56.634471 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e20126da-7b15-41d2-bddd-14b567ba2820-xtables-lock\") pod \"kube-proxy-59b65\" (UID: \"e20126da-7b15-41d2-bddd-14b567ba2820\") " pod="kube-system/kube-proxy-59b65"
Mar 7 01:32:56.857198 kubelet[2709]: E0307 01:32:56.857146 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:56.877814 kubelet[2709]: E0307 01:32:56.877776 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:56.879374 containerd[1558]: time="2026-03-07T01:32:56.879077604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59b65,Uid:e20126da-7b15-41d2-bddd-14b567ba2820,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:56.905827 containerd[1558]: time="2026-03-07T01:32:56.904798797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:32:56.905921 containerd[1558]: time="2026-03-07T01:32:56.905659207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:32:56.905921 containerd[1558]: time="2026-03-07T01:32:56.905718577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:32:56.905921 containerd[1558]: time="2026-03-07T01:32:56.905836987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:32:56.936310 kubelet[2709]: I0307 01:32:56.936273 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/109e482b-ecad-48b2-bd29-b7ecc22c8d24-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-q8c8h\" (UID: \"109e482b-ecad-48b2-bd29-b7ecc22c8d24\") " pod="tigera-operator/tigera-operator-6bf85f8dd-q8c8h"
Mar 7 01:32:56.936452 kubelet[2709]: I0307 01:32:56.936339 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59gmm\" (UniqueName: \"kubernetes.io/projected/109e482b-ecad-48b2-bd29-b7ecc22c8d24-kube-api-access-59gmm\") pod \"tigera-operator-6bf85f8dd-q8c8h\" (UID: \"109e482b-ecad-48b2-bd29-b7ecc22c8d24\") " pod="tigera-operator/tigera-operator-6bf85f8dd-q8c8h"
Mar 7 01:32:56.959519 containerd[1558]: time="2026-03-07T01:32:56.959470364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59b65,Uid:e20126da-7b15-41d2-bddd-14b567ba2820,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb70de9f8ba7404254369a1d9f90758efdf0cd1c91527d87fa6208b76127b32e\""
Mar 7 01:32:56.960329 kubelet[2709]: E0307 01:32:56.960282 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:56.966515 containerd[1558]: time="2026-03-07T01:32:56.966342447Z" level=info msg="CreateContainer within sandbox \"cb70de9f8ba7404254369a1d9f90758efdf0cd1c91527d87fa6208b76127b32e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:32:56.979875 containerd[1558]: time="2026-03-07T01:32:56.979748014Z" level=info msg="CreateContainer within sandbox \"cb70de9f8ba7404254369a1d9f90758efdf0cd1c91527d87fa6208b76127b32e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15836a3d8fc37ccfca8494844927731e12adf49cab57df4e9fa42843de15641f\""
Mar 7 01:32:56.982664 containerd[1558]: time="2026-03-07T01:32:56.981168945Z" level=info msg="StartContainer for \"15836a3d8fc37ccfca8494844927731e12adf49cab57df4e9fa42843de15641f\""
Mar 7 01:32:57.058819 containerd[1558]: time="2026-03-07T01:32:57.058775903Z" level=info msg="StartContainer for \"15836a3d8fc37ccfca8494844927731e12adf49cab57df4e9fa42843de15641f\" returns successfully"
Mar 7 01:32:57.145573 containerd[1558]: time="2026-03-07T01:32:57.145538377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-q8c8h,Uid:109e482b-ecad-48b2-bd29-b7ecc22c8d24,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:32:57.182795 containerd[1558]: time="2026-03-07T01:32:57.181757345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:32:57.182795 containerd[1558]: time="2026-03-07T01:32:57.181865405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:32:57.182795 containerd[1558]: time="2026-03-07T01:32:57.181884165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:32:57.182795 containerd[1558]: time="2026-03-07T01:32:57.182052825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:32:57.267216 containerd[1558]: time="2026-03-07T01:32:57.267089178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-q8c8h,Uid:109e482b-ecad-48b2-bd29-b7ecc22c8d24,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1a6fd8b23d523b396197ade7852cb3787b431eca385aeddfe2eca61c9109bedc\""
Mar 7 01:32:57.270743 containerd[1558]: time="2026-03-07T01:32:57.270713719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:32:57.861925 kubelet[2709]: E0307 01:32:57.861874 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:57.862941 kubelet[2709]: E0307 01:32:57.862652 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:32:57.875927 kubelet[2709]: I0307 01:32:57.875001 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59b65" podStartSLOduration=1.8749842110000001 podStartE2EDuration="1.874984211s" podCreationTimestamp="2026-03-07 01:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:57.874204061 +0000 UTC m=+7.171517064" watchObservedRunningTime="2026-03-07 01:32:57.874984211 +0000 UTC m=+7.172297214"
Mar 7 01:32:58.129632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526646048.mount: Deactivated successfully.
Mar 7 01:32:59.407113 containerd[1558]: time="2026-03-07T01:32:59.407075913Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:59.408012 containerd[1558]: time="2026-03-07T01:32:59.407870551Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:32:59.409279 containerd[1558]: time="2026-03-07T01:32:59.408532580Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:59.410221 containerd[1558]: time="2026-03-07T01:32:59.410199155Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:59.410880 containerd[1558]: time="2026-03-07T01:32:59.410853584Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.140101535s"
Mar 7 01:32:59.410936 containerd[1558]: time="2026-03-07T01:32:59.410882504Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:32:59.413349 containerd[1558]: time="2026-03-07T01:32:59.413326277Z" level=info msg="CreateContainer within sandbox \"1a6fd8b23d523b396197ade7852cb3787b431eca385aeddfe2eca61c9109bedc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:32:59.424215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704949146.mount: Deactivated successfully.
Mar 7 01:32:59.424952 containerd[1558]: time="2026-03-07T01:32:59.424880619Z" level=info msg="CreateContainer within sandbox \"1a6fd8b23d523b396197ade7852cb3787b431eca385aeddfe2eca61c9109bedc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a68c9c525d7c48e05cb72863e8445b5fc3c754a63589fc95bd629ff56450b461\""
Mar 7 01:32:59.426450 containerd[1558]: time="2026-03-07T01:32:59.426224894Z" level=info msg="StartContainer for \"a68c9c525d7c48e05cb72863e8445b5fc3c754a63589fc95bd629ff56450b461\""
Mar 7 01:32:59.487626 containerd[1558]: time="2026-03-07T01:32:59.487554021Z" level=info msg="StartContainer for \"a68c9c525d7c48e05cb72863e8445b5fc3c754a63589fc95bd629ff56450b461\" returns successfully"
Mar 7 01:33:01.115779 kubelet[2709]: E0307 01:33:01.114805 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:33:01.307756 kubelet[2709]: I0307 01:33:01.307407 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-q8c8h" podStartSLOduration=3.16425257 podStartE2EDuration="5.307394254s" podCreationTimestamp="2026-03-07 01:32:56 +0000 UTC" firstStartedPulling="2026-03-07 01:32:57.268389228 +0000 UTC m=+6.565702221" lastFinishedPulling="2026-03-07 01:32:59.411530912 +0000 UTC m=+8.708843905" observedRunningTime="2026-03-07 01:32:59.876303718 +0000 UTC m=+9.173616711" watchObservedRunningTime="2026-03-07 01:33:01.307394254 +0000 UTC m=+10.604707247"
Mar 7 01:33:01.877638 kubelet[2709]: E0307 01:33:01.875455 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:33:04.403675 kubelet[2709]: E0307 01:33:04.403458 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:33:04.883954 kubelet[2709]: E0307 01:33:04.883768 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:33:05.598734 kubelet[2709]: I0307 01:33:05.598416 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56fc5a14-ee92-4be6-b996-62ff65e8533f-tigera-ca-bundle\") pod \"calico-typha-ffb56f7c8-l6hxl\" (UID: \"56fc5a14-ee92-4be6-b996-62ff65e8533f\") " pod="calico-system/calico-typha-ffb56f7c8-l6hxl"
Mar 7 01:33:05.598734 kubelet[2709]: I0307 01:33:05.598455 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4j4\" (UniqueName: \"kubernetes.io/projected/56fc5a14-ee92-4be6-b996-62ff65e8533f-kube-api-access-zk4j4\") pod \"calico-typha-ffb56f7c8-l6hxl\" (UID: \"56fc5a14-ee92-4be6-b996-62ff65e8533f\") " pod="calico-system/calico-typha-ffb56f7c8-l6hxl"
Mar 7 01:33:05.598734 kubelet[2709]: I0307 01:33:05.598474 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56fc5a14-ee92-4be6-b996-62ff65e8533f-typha-certs\") pod \"calico-typha-ffb56f7c8-l6hxl\" (UID: \"56fc5a14-ee92-4be6-b996-62ff65e8533f\") " pod="calico-system/calico-typha-ffb56f7c8-l6hxl"
Mar 7 01:33:05.665262 sudo[1814]: pam_unix(sudo:session): session closed for user root
Mar 7 01:33:05.696872 sshd[1810]: pam_unix(sshd:session): session closed for user core
Mar 7 01:33:05.702005 kubelet[2709]: I0307 01:33:05.699801 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-net-dir\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702005 kubelet[2709]: I0307 01:33:05.699834 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-flexvol-driver-host\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702005 kubelet[2709]: I0307 01:33:05.699863 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-run-calico\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702005 kubelet[2709]: I0307 01:33:05.699878 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-xtables-lock\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702005 kubelet[2709]: I0307 01:33:05.699933 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-nodeproc\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.699948 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-lib-calico\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.699962 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/950033ed-d8e0-41bd-bd2f-73e016c04f0e-node-certs\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.699975 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-sys-fs\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.699989 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-log-dir\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.700004 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-bin-dir\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702174 kubelet[2709]: I0307 01:33:05.700337 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-bpffs\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702282 kubelet[2709]: I0307 01:33:05.700365 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-lib-modules\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702282 kubelet[2709]: I0307 01:33:05.700381 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjnt2\" (UniqueName: \"kubernetes.io/projected/950033ed-d8e0-41bd-bd2f-73e016c04f0e-kube-api-access-vjnt2\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702282 kubelet[2709]: I0307 01:33:05.700405 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-policysync\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.702282 kubelet[2709]: I0307 01:33:05.700419 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/950033ed-d8e0-41bd-bd2f-73e016c04f0e-tigera-ca-bundle\") pod \"calico-node-8kpg9\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " pod="calico-system/calico-node-8kpg9"
Mar 7 01:33:05.707924 systemd[1]: sshd@6-172.238.171.132:22-68.220.241.50:54300.service: Deactivated successfully.
Mar 7 01:33:05.746020 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:33:05.749144 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:33:05.755239 systemd-logind[1538]: Removed session 7.
Mar 7 01:33:05.786006 kubelet[2709]: E0307 01:33:05.785965 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:05.801937 kubelet[2709]: I0307 01:33:05.801865 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c-registration-dir\") pod \"csi-node-driver-tt5kk\" (UID: \"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c\") " pod="calico-system/csi-node-driver-tt5kk" Mar 7 01:33:05.804215 kubelet[2709]: E0307 01:33:05.804118 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.804215 kubelet[2709]: W0307 01:33:05.804137 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.804215 kubelet[2709]: E0307 01:33:05.804154 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.804869 kubelet[2709]: E0307 01:33:05.804454 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.804869 kubelet[2709]: W0307 01:33:05.804465 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.804869 kubelet[2709]: E0307 01:33:05.804474 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.804869 kubelet[2709]: I0307 01:33:05.804560 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c-socket-dir\") pod \"csi-node-driver-tt5kk\" (UID: \"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c\") " pod="calico-system/csi-node-driver-tt5kk" Mar 7 01:33:05.804869 kubelet[2709]: E0307 01:33:05.804747 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.804869 kubelet[2709]: W0307 01:33:05.804754 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.804869 kubelet[2709]: E0307 01:33:05.804763 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.805631 kubelet[2709]: E0307 01:33:05.805029 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.805631 kubelet[2709]: W0307 01:33:05.805046 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.805631 kubelet[2709]: E0307 01:33:05.805055 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.805631 kubelet[2709]: E0307 01:33:05.805293 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.805631 kubelet[2709]: W0307 01:33:05.805301 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.805631 kubelet[2709]: E0307 01:33:05.805309 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.805769 kubelet[2709]: E0307 01:33:05.805748 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.805769 kubelet[2709]: W0307 01:33:05.805756 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.805769 kubelet[2709]: E0307 01:33:05.805765 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.807304 kubelet[2709]: E0307 01:33:05.806077 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.807304 kubelet[2709]: W0307 01:33:05.806100 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.807304 kubelet[2709]: E0307 01:33:05.806109 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.807304 kubelet[2709]: E0307 01:33:05.806348 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.807304 kubelet[2709]: W0307 01:33:05.806389 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.807304 kubelet[2709]: E0307 01:33:05.806398 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.808114 kubelet[2709]: E0307 01:33:05.808079 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.808114 kubelet[2709]: W0307 01:33:05.808094 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.808114 kubelet[2709]: E0307 01:33:05.808104 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.808340 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.809602 kubelet[2709]: W0307 01:33:05.808353 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.808361 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.808615 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.809602 kubelet[2709]: W0307 01:33:05.808625 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.808635 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.808999 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.809602 kubelet[2709]: W0307 01:33:05.809010 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.809021 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.809602 kubelet[2709]: E0307 01:33:05.809311 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.809811 kubelet[2709]: W0307 01:33:05.809322 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.809811 kubelet[2709]: E0307 01:33:05.809333 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.809811 kubelet[2709]: E0307 01:33:05.809796 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.809811 kubelet[2709]: W0307 01:33:05.809804 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.809811 kubelet[2709]: E0307 01:33:05.809812 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.810095 kubelet[2709]: E0307 01:33:05.810077 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.810095 kubelet[2709]: W0307 01:33:05.810091 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.810166 kubelet[2709]: E0307 01:33:05.810103 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.810429 kubelet[2709]: E0307 01:33:05.810322 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.810429 kubelet[2709]: W0307 01:33:05.810331 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.810429 kubelet[2709]: E0307 01:33:05.810339 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.812937 kubelet[2709]: E0307 01:33:05.812480 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.812937 kubelet[2709]: W0307 01:33:05.812490 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.812937 kubelet[2709]: E0307 01:33:05.812499 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.812937 kubelet[2709]: E0307 01:33:05.812725 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.812937 kubelet[2709]: W0307 01:33:05.812733 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.812937 kubelet[2709]: E0307 01:33:05.812740 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.813074 kubelet[2709]: E0307 01:33:05.813026 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.813074 kubelet[2709]: W0307 01:33:05.813034 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.813074 kubelet[2709]: E0307 01:33:05.813042 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.813642 kubelet[2709]: E0307 01:33:05.813277 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.813642 kubelet[2709]: W0307 01:33:05.813286 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.813642 kubelet[2709]: E0307 01:33:05.813295 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.813642 kubelet[2709]: E0307 01:33:05.813549 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.813642 kubelet[2709]: W0307 01:33:05.813557 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.813642 kubelet[2709]: E0307 01:33:05.813565 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.816506 kubelet[2709]: E0307 01:33:05.816206 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.816506 kubelet[2709]: W0307 01:33:05.816219 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.816506 kubelet[2709]: E0307 01:33:05.816229 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.816506 kubelet[2709]: I0307 01:33:05.816248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c-kubelet-dir\") pod \"csi-node-driver-tt5kk\" (UID: \"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c\") " pod="calico-system/csi-node-driver-tt5kk" Mar 7 01:33:05.816506 kubelet[2709]: E0307 01:33:05.816477 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.816506 kubelet[2709]: W0307 01:33:05.816486 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.816506 kubelet[2709]: E0307 01:33:05.816494 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.817175 kubelet[2709]: E0307 01:33:05.816684 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.817175 kubelet[2709]: W0307 01:33:05.816694 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.817175 kubelet[2709]: E0307 01:33:05.816702 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.817175 kubelet[2709]: E0307 01:33:05.816890 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.817175 kubelet[2709]: W0307 01:33:05.816897 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.817175 kubelet[2709]: E0307 01:33:05.816968 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.818069 kubelet[2709]: E0307 01:33:05.817389 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.818069 kubelet[2709]: W0307 01:33:05.817400 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.818069 kubelet[2709]: E0307 01:33:05.817409 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.821153 kubelet[2709]: E0307 01:33:05.821134 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.821153 kubelet[2709]: W0307 01:33:05.821150 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.821153 kubelet[2709]: E0307 01:33:05.821176 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.821153 kubelet[2709]: I0307 01:33:05.821195 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c-varrun\") pod \"csi-node-driver-tt5kk\" (UID: \"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c\") " pod="calico-system/csi-node-driver-tt5kk" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.821950 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.828413 kubelet[2709]: W0307 01:33:05.821963 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.821972 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.823963 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.828413 kubelet[2709]: W0307 01:33:05.823974 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.823985 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.825381 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.826626 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.828413 kubelet[2709]: W0307 01:33:05.826637 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.828413 kubelet[2709]: E0307 01:33:05.826650 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.828878 containerd[1558]: time="2026-03-07T01:33:05.825777569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ffb56f7c8-l6hxl,Uid:56fc5a14-ee92-4be6-b996-62ff65e8533f,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.832713 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.841822 kubelet[2709]: W0307 01:33:05.832723 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.832734 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.833551 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.841822 kubelet[2709]: W0307 01:33:05.833564 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.833599 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.834347 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.841822 kubelet[2709]: W0307 01:33:05.834357 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.834367 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.841822 kubelet[2709]: E0307 01:33:05.835751 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.842223 kubelet[2709]: W0307 01:33:05.835761 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.842223 kubelet[2709]: E0307 01:33:05.835772 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.842223 kubelet[2709]: E0307 01:33:05.839568 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.842223 kubelet[2709]: W0307 01:33:05.839641 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.842223 kubelet[2709]: E0307 01:33:05.839657 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.844271 kubelet[2709]: E0307 01:33:05.844207 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.844271 kubelet[2709]: W0307 01:33:05.844223 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.845176 kubelet[2709]: E0307 01:33:05.844289 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.855995 kubelet[2709]: E0307 01:33:05.849605 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.855995 kubelet[2709]: W0307 01:33:05.849623 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.855995 kubelet[2709]: E0307 01:33:05.849636 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.862963 kubelet[2709]: E0307 01:33:05.861026 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.862963 kubelet[2709]: W0307 01:33:05.862507 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.862963 kubelet[2709]: E0307 01:33:05.862666 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.866082 kubelet[2709]: E0307 01:33:05.865393 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.866082 kubelet[2709]: W0307 01:33:05.865415 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.866082 kubelet[2709]: E0307 01:33:05.865434 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:33:05.871017 kubelet[2709]: E0307 01:33:05.870106 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.871017 kubelet[2709]: W0307 01:33:05.870135 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.871017 kubelet[2709]: E0307 01:33:05.870155 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:05.872349 kubelet[2709]: I0307 01:33:05.871536 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhk7t\" (UniqueName: \"kubernetes.io/projected/ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c-kube-api-access-dhk7t\") pod \"csi-node-driver-tt5kk\" (UID: \"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c\") " pod="calico-system/csi-node-driver-tt5kk" Mar 7 01:33:05.877012 kubelet[2709]: E0307 01:33:05.876762 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:05.877012 kubelet[2709]: W0307 01:33:05.876787 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:05.877012 kubelet[2709]: E0307 01:33:05.876809 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 01:33:05.879931 kubelet[2709]: E0307 01:33:05.879129 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.879931 kubelet[2709]: W0307 01:33:05.879185 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.879931 kubelet[2709]: E0307 01:33:05.879200 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.882164 kubelet[2709]: E0307 01:33:05.882148 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.882164 kubelet[2709]: W0307 01:33:05.882162 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.882242 kubelet[2709]: E0307 01:33:05.882177 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.885528 kubelet[2709]: E0307 01:33:05.885509 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.885573 kubelet[2709]: W0307 01:33:05.885525 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.885573 kubelet[2709]: E0307 01:33:05.885550 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.888241 kubelet[2709]: E0307 01:33:05.888109 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.888241 kubelet[2709]: W0307 01:33:05.888120 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.888940 kubelet[2709]: E0307 01:33:05.888375 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.889396 kubelet[2709]: E0307 01:33:05.889348 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.889396 kubelet[2709]: W0307 01:33:05.889360 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.889457 kubelet[2709]: E0307 01:33:05.889382 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.892213 kubelet[2709]: E0307 01:33:05.891973 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.892213 kubelet[2709]: W0307 01:33:05.891988 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.892213 kubelet[2709]: E0307 01:33:05.892124 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.894323 kubelet[2709]: E0307 01:33:05.894027 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.894323 kubelet[2709]: W0307 01:33:05.894041 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.894323 kubelet[2709]: E0307 01:33:05.894053 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.898718 kubelet[2709]: E0307 01:33:05.897760 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.898718 kubelet[2709]: W0307 01:33:05.897773 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.898718 kubelet[2709]: E0307 01:33:05.897785 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.899933 kubelet[2709]: E0307 01:33:05.899576 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.899933 kubelet[2709]: W0307 01:33:05.899589 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.899933 kubelet[2709]: E0307 01:33:05.899600 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.900981 kubelet[2709]: E0307 01:33:05.900962 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.900981 kubelet[2709]: W0307 01:33:05.900978 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.901676 kubelet[2709]: E0307 01:33:05.901092 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.905083 kubelet[2709]: E0307 01:33:05.905047 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.905083 kubelet[2709]: W0307 01:33:05.905083 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.905146 kubelet[2709]: E0307 01:33:05.905094 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.905883 kubelet[2709]: E0307 01:33:05.905360 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.905883 kubelet[2709]: W0307 01:33:05.905370 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.905883 kubelet[2709]: E0307 01:33:05.905379 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.905883 kubelet[2709]: E0307 01:33:05.905757 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.905883 kubelet[2709]: W0307 01:33:05.905765 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.905883 kubelet[2709]: E0307 01:33:05.905774 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.908931 kubelet[2709]: E0307 01:33:05.908241 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.908931 kubelet[2709]: W0307 01:33:05.908253 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.908931 kubelet[2709]: E0307 01:33:05.908264 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.912449 kubelet[2709]: E0307 01:33:05.910997 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.912449 kubelet[2709]: W0307 01:33:05.911011 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.912449 kubelet[2709]: E0307 01:33:05.911022 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.912449 kubelet[2709]: E0307 01:33:05.911896 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.912449 kubelet[2709]: W0307 01:33:05.911977 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.912449 kubelet[2709]: E0307 01:33:05.911988 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.913823 kubelet[2709]: E0307 01:33:05.912980 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.913823 kubelet[2709]: W0307 01:33:05.912991 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.913823 kubelet[2709]: E0307 01:33:05.913000 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.913823 kubelet[2709]: E0307 01:33:05.913665 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.913823 kubelet[2709]: W0307 01:33:05.913675 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.913823 kubelet[2709]: E0307 01:33:05.913695 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.918150 kubelet[2709]: E0307 01:33:05.918085 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.918150 kubelet[2709]: W0307 01:33:05.918104 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.918150 kubelet[2709]: E0307 01:33:05.918119 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.919540 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.920963 kubelet[2709]: W0307 01:33:05.919553 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.919565 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.920188 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.920963 kubelet[2709]: W0307 01:33:05.920197 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.920207 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.920479 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.920963 kubelet[2709]: W0307 01:33:05.920488 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.920496 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.920963 kubelet[2709]: E0307 01:33:05.920744 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.921208 kubelet[2709]: W0307 01:33:05.920752 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.921208 kubelet[2709]: E0307 01:33:05.920760 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.921208 kubelet[2709]: E0307 01:33:05.921062 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.921208 kubelet[2709]: W0307 01:33:05.921070 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.921208 kubelet[2709]: E0307 01:33:05.921079 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.923070 kubelet[2709]: E0307 01:33:05.921744 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.923070 kubelet[2709]: W0307 01:33:05.921755 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.923070 kubelet[2709]: E0307 01:33:05.921764 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.923070 kubelet[2709]: E0307 01:33:05.922081 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.923070 kubelet[2709]: W0307 01:33:05.922090 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.923070 kubelet[2709]: E0307 01:33:05.922098 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.924578 kubelet[2709]: E0307 01:33:05.923777 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.924578 kubelet[2709]: W0307 01:33:05.923788 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.924578 kubelet[2709]: E0307 01:33:05.923797 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.942465 kubelet[2709]: E0307 01:33:05.936041 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.942465 kubelet[2709]: W0307 01:33:05.936071 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.942465 kubelet[2709]: E0307 01:33:05.936089 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.944241 containerd[1558]: time="2026-03-07T01:33:05.944027117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:33:05.944241 containerd[1558]: time="2026-03-07T01:33:05.944069987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:33:05.944241 containerd[1558]: time="2026-03-07T01:33:05.944080437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:33:05.944241 containerd[1558]: time="2026-03-07T01:33:05.944168217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:33:05.973695 containerd[1558]: time="2026-03-07T01:33:05.972373653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kpg9,Uid:950033ed-d8e0-41bd-bd2f-73e016c04f0e,Namespace:calico-system,Attempt:0,}"
Mar 7 01:33:05.993764 kubelet[2709]: E0307 01:33:05.992957 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.993764 kubelet[2709]: W0307 01:33:05.992982 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.994150 kubelet[2709]: E0307 01:33:05.994122 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.995606 kubelet[2709]: E0307 01:33:05.994736 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.995606 kubelet[2709]: W0307 01:33:05.994750 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.995606 kubelet[2709]: E0307 01:33:05.994789 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.995606 kubelet[2709]: E0307 01:33:05.995467 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.995606 kubelet[2709]: W0307 01:33:05.995478 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.995606 kubelet[2709]: E0307 01:33:05.995502 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.998185 kubelet[2709]: E0307 01:33:05.997205 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:05.998185 kubelet[2709]: W0307 01:33:05.997217 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:05.998185 kubelet[2709]: E0307 01:33:05.997232 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:05.999956 kubelet[2709]: E0307 01:33:05.999924 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.000570 kubelet[2709]: W0307 01:33:06.000445 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.001114 kubelet[2709]: E0307 01:33:06.001047 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.002543 kubelet[2709]: E0307 01:33:06.002151 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.002543 kubelet[2709]: W0307 01:33:06.002164 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.002543 kubelet[2709]: E0307 01:33:06.002177 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.004938 kubelet[2709]: E0307 01:33:06.004398 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.004938 kubelet[2709]: W0307 01:33:06.004644 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.004938 kubelet[2709]: E0307 01:33:06.004815 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.006942 kubelet[2709]: E0307 01:33:06.005959 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.006942 kubelet[2709]: W0307 01:33:06.005971 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.006942 kubelet[2709]: E0307 01:33:06.005983 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.006942 kubelet[2709]: E0307 01:33:06.006703 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.006942 kubelet[2709]: W0307 01:33:06.006712 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.006942 kubelet[2709]: E0307 01:33:06.006721 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.007793 kubelet[2709]: E0307 01:33:06.007766 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.008408 kubelet[2709]: W0307 01:33:06.007783 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.008408 kubelet[2709]: E0307 01:33:06.007897 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.009459 kubelet[2709]: E0307 01:33:06.009439 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.009459 kubelet[2709]: W0307 01:33:06.009454 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.009532 kubelet[2709]: E0307 01:33:06.009465 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.011767 kubelet[2709]: E0307 01:33:06.011220 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.011767 kubelet[2709]: W0307 01:33:06.011233 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.011767 kubelet[2709]: E0307 01:33:06.011243 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.012076 kubelet[2709]: E0307 01:33:06.012029 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.012076 kubelet[2709]: W0307 01:33:06.012038 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.012076 kubelet[2709]: E0307 01:33:06.012048 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.013680 kubelet[2709]: E0307 01:33:06.013552 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.014457 kubelet[2709]: W0307 01:33:06.014008 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.014457 kubelet[2709]: E0307 01:33:06.014026 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.016141 kubelet[2709]: E0307 01:33:06.016128 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.016274 kubelet[2709]: W0307 01:33:06.016188 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.016274 kubelet[2709]: E0307 01:33:06.016253 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.018729 kubelet[2709]: E0307 01:33:06.018710 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.018729 kubelet[2709]: W0307 01:33:06.018726 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.018817 kubelet[2709]: E0307 01:33:06.018737 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.019767 kubelet[2709]: E0307 01:33:06.019703 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.019767 kubelet[2709]: W0307 01:33:06.019716 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.019767 kubelet[2709]: E0307 01:33:06.019726 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.022178 kubelet[2709]: E0307 01:33:06.021814 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.022178 kubelet[2709]: W0307 01:33:06.021827 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.022178 kubelet[2709]: E0307 01:33:06.021836 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.023578 kubelet[2709]: E0307 01:33:06.023108 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.023578 kubelet[2709]: W0307 01:33:06.023120 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.023578 kubelet[2709]: E0307 01:33:06.023129 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.026151 kubelet[2709]: E0307 01:33:06.025106 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.026151 kubelet[2709]: W0307 01:33:06.025117 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.026151 kubelet[2709]: E0307 01:33:06.025128 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.030808 kubelet[2709]: E0307 01:33:06.030783 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.030808 kubelet[2709]: W0307 01:33:06.030809 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.030890 kubelet[2709]: E0307 01:33:06.030829 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.031182 kubelet[2709]: E0307 01:33:06.031168 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.032001 kubelet[2709]: W0307 01:33:06.031230 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.032001 kubelet[2709]: E0307 01:33:06.031292 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.032792 kubelet[2709]: E0307 01:33:06.032779 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.033136 kubelet[2709]: W0307 01:33:06.032848 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.033136 kubelet[2709]: E0307 01:33:06.032863 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.036140 kubelet[2709]: E0307 01:33:06.036006 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.036140 kubelet[2709]: W0307 01:33:06.036017 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.036140 kubelet[2709]: E0307 01:33:06.036028 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.036955 kubelet[2709]: E0307 01:33:06.036934 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:33:06.036955 kubelet[2709]: W0307 01:33:06.036950 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:33:06.037025 kubelet[2709]: E0307 01:33:06.036962 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:33:06.043771 containerd[1558]: time="2026-03-07T01:33:06.041463283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:33:06.043771 containerd[1558]: time="2026-03-07T01:33:06.041514433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:33:06.043771 containerd[1558]: time="2026-03-07T01:33:06.041529452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:33:06.043771 containerd[1558]: time="2026-03-07T01:33:06.041616792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:06.057025 kubelet[2709]: E0307 01:33:06.056993 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:33:06.057025 kubelet[2709]: W0307 01:33:06.057016 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:33:06.057129 kubelet[2709]: E0307 01:33:06.057035 2709 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:33:06.151126 containerd[1558]: time="2026-03-07T01:33:06.151032427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8kpg9,Uid:950033ed-d8e0-41bd-bd2f-73e016c04f0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\"" Mar 7 01:33:06.155478 containerd[1558]: time="2026-03-07T01:33:06.154382023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:33:06.163352 containerd[1558]: time="2026-03-07T01:33:06.162491741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ffb56f7c8-l6hxl,Uid:56fc5a14-ee92-4be6-b996-62ff65e8533f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd1fa3f4e51a749a0e739df72089bf4853268fc8da2d5d4abda257d44376dedb\"" Mar 7 01:33:06.163416 kubelet[2709]: E0307 01:33:06.163289 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:07.008599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894956247.mount: Deactivated successfully. 
Mar 7 01:33:07.085006 containerd[1558]: time="2026-03-07T01:33:07.084895729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:07.085647 containerd[1558]: time="2026-03-07T01:33:07.085597078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 7 01:33:07.086111 containerd[1558]: time="2026-03-07T01:33:07.086088348Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:07.088434 containerd[1558]: time="2026-03-07T01:33:07.088371655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:07.089080 containerd[1558]: time="2026-03-07T01:33:07.088883264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 934.473011ms" Mar 7 01:33:07.089080 containerd[1558]: time="2026-03-07T01:33:07.088928894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:33:07.092774 containerd[1558]: time="2026-03-07T01:33:07.092746849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:33:07.094361 containerd[1558]: time="2026-03-07T01:33:07.094318597Z" level=info msg="CreateContainer within sandbox 
\"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:33:07.102697 containerd[1558]: time="2026-03-07T01:33:07.102660826Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\"" Mar 7 01:33:07.105330 containerd[1558]: time="2026-03-07T01:33:07.103558694Z" level=info msg="StartContainer for \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\"" Mar 7 01:33:07.181405 containerd[1558]: time="2026-03-07T01:33:07.181218524Z" level=info msg="StartContainer for \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\" returns successfully" Mar 7 01:33:07.273961 containerd[1558]: time="2026-03-07T01:33:07.273817764Z" level=info msg="shim disconnected" id=e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677 namespace=k8s.io Mar 7 01:33:07.274337 containerd[1558]: time="2026-03-07T01:33:07.274301034Z" level=warning msg="cleaning up after shim disconnected" id=e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677 namespace=k8s.io Mar 7 01:33:07.274411 containerd[1558]: time="2026-03-07T01:33:07.274396994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:07.726698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677-rootfs.mount: Deactivated successfully. 
Mar 7 01:33:07.805423 kubelet[2709]: E0307 01:33:07.805359 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:08.430842 containerd[1558]: time="2026-03-07T01:33:08.430756598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:08.432152 containerd[1558]: time="2026-03-07T01:33:08.432098956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 7 01:33:08.433257 containerd[1558]: time="2026-03-07T01:33:08.432683255Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:08.438153 containerd[1558]: time="2026-03-07T01:33:08.438117358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:08.439583 containerd[1558]: time="2026-03-07T01:33:08.439372037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.346532188s" Mar 7 01:33:08.439667 containerd[1558]: time="2026-03-07T01:33:08.439651817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:33:08.442573 containerd[1558]: time="2026-03-07T01:33:08.442365513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:33:08.456449 containerd[1558]: time="2026-03-07T01:33:08.456389077Z" level=info msg="CreateContainer within sandbox \"bd1fa3f4e51a749a0e739df72089bf4853268fc8da2d5d4abda257d44376dedb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:33:08.481820 containerd[1558]: time="2026-03-07T01:33:08.481759437Z" level=info msg="CreateContainer within sandbox \"bd1fa3f4e51a749a0e739df72089bf4853268fc8da2d5d4abda257d44376dedb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"abab875d6fd85a1c11cbd36f7f97aff9d02b95bcef0cbb4d4f98dc99bc1416e1\"" Mar 7 01:33:08.482771 containerd[1558]: time="2026-03-07T01:33:08.482718685Z" level=info msg="StartContainer for \"abab875d6fd85a1c11cbd36f7f97aff9d02b95bcef0cbb4d4f98dc99bc1416e1\"" Mar 7 01:33:08.572937 containerd[1558]: time="2026-03-07T01:33:08.571271112Z" level=info msg="StartContainer for \"abab875d6fd85a1c11cbd36f7f97aff9d02b95bcef0cbb4d4f98dc99bc1416e1\" returns successfully" Mar 7 01:33:08.903715 kubelet[2709]: E0307 01:33:08.903684 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:08.914860 kubelet[2709]: I0307 01:33:08.914801 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ffb56f7c8-l6hxl" podStartSLOduration=1.63710176 podStartE2EDuration="3.914786245s" podCreationTimestamp="2026-03-07 01:33:05 +0000 UTC" firstStartedPulling="2026-03-07 01:33:06.164588788 +0000 UTC m=+15.461901781" lastFinishedPulling="2026-03-07 01:33:08.442273273 +0000 UTC m=+17.739586266" observedRunningTime="2026-03-07 01:33:08.913717137 +0000 UTC m=+18.211030130" 
watchObservedRunningTime="2026-03-07 01:33:08.914786245 +0000 UTC m=+18.212099238" Mar 7 01:33:09.805230 kubelet[2709]: E0307 01:33:09.804691 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:09.905218 kubelet[2709]: I0307 01:33:09.905186 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:09.905632 kubelet[2709]: E0307 01:33:09.905542 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:10.056519 update_engine[1543]: I20260307 01:33:10.055287 1543 update_attempter.cc:509] Updating boot flags... Mar 7 01:33:10.125948 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3429) Mar 7 01:33:10.226956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3433) Mar 7 01:33:11.806388 kubelet[2709]: E0307 01:33:11.806079 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:12.726217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362705329.mount: Deactivated successfully. 
Mar 7 01:33:12.752580 containerd[1558]: time="2026-03-07T01:33:12.752518637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:12.754170 containerd[1558]: time="2026-03-07T01:33:12.754128236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:33:12.755051 containerd[1558]: time="2026-03-07T01:33:12.754955745Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:12.758536 containerd[1558]: time="2026-03-07T01:33:12.758491962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:12.760929 containerd[1558]: time="2026-03-07T01:33:12.760879230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.318228947s" Mar 7 01:33:12.761084 containerd[1558]: time="2026-03-07T01:33:12.761030160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:33:12.766215 containerd[1558]: time="2026-03-07T01:33:12.766185296Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:33:12.788401 containerd[1558]: time="2026-03-07T01:33:12.788370268Z" level=info msg="CreateContainer 
within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\"" Mar 7 01:33:12.789995 containerd[1558]: time="2026-03-07T01:33:12.789975397Z" level=info msg="StartContainer for \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\"" Mar 7 01:33:12.847249 containerd[1558]: time="2026-03-07T01:33:12.847184631Z" level=info msg="StartContainer for \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\" returns successfully" Mar 7 01:33:12.983275 containerd[1558]: time="2026-03-07T01:33:12.983019173Z" level=info msg="shim disconnected" id=75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6 namespace=k8s.io Mar 7 01:33:12.983275 containerd[1558]: time="2026-03-07T01:33:12.983065683Z" level=warning msg="cleaning up after shim disconnected" id=75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6 namespace=k8s.io Mar 7 01:33:12.983275 containerd[1558]: time="2026-03-07T01:33:12.983074583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:13.723048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6-rootfs.mount: Deactivated successfully. 
Mar 7 01:33:13.805469 kubelet[2709]: E0307 01:33:13.805428 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:13.921184 containerd[1558]: time="2026-03-07T01:33:13.921121689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:33:15.805616 kubelet[2709]: E0307 01:33:15.805235 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tt5kk" podUID="ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c" Mar 7 01:33:15.813175 containerd[1558]: time="2026-03-07T01:33:15.813141497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:15.814002 containerd[1558]: time="2026-03-07T01:33:15.813957997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:33:15.814749 containerd[1558]: time="2026-03-07T01:33:15.814712746Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:15.817775 containerd[1558]: time="2026-03-07T01:33:15.816873345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:15.817775 containerd[1558]: time="2026-03-07T01:33:15.817675974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.896520935s" Mar 7 01:33:15.817775 containerd[1558]: time="2026-03-07T01:33:15.817699434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:33:15.821663 containerd[1558]: time="2026-03-07T01:33:15.821640452Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:33:15.841146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360154118.mount: Deactivated successfully. Mar 7 01:33:15.845975 containerd[1558]: time="2026-03-07T01:33:15.845945049Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\"" Mar 7 01:33:15.846736 containerd[1558]: time="2026-03-07T01:33:15.846703518Z" level=info msg="StartContainer for \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\"" Mar 7 01:33:15.951663 containerd[1558]: time="2026-03-07T01:33:15.951565828Z" level=info msg="StartContainer for \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\" returns successfully" Mar 7 01:33:16.475282 containerd[1558]: time="2026-03-07T01:33:16.475181162Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:33:16.505687 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9-rootfs.mount: Deactivated successfully. Mar 7 01:33:16.506706 containerd[1558]: time="2026-03-07T01:33:16.506530596Z" level=info msg="shim disconnected" id=ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9 namespace=k8s.io Mar 7 01:33:16.506706 containerd[1558]: time="2026-03-07T01:33:16.506579176Z" level=warning msg="cleaning up after shim disconnected" id=ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9 namespace=k8s.io Mar 7 01:33:16.506706 containerd[1558]: time="2026-03-07T01:33:16.506587976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:16.522847 kubelet[2709]: I0307 01:33:16.522371 2709 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:33:16.593722 kubelet[2709]: I0307 01:33:16.593693 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b74ff1db-535d-4184-94d2-59a40d15c8c9-calico-apiserver-certs\") pod \"calico-apiserver-5966b74f89-mgnnz\" (UID: \"b74ff1db-535d-4184-94d2-59a40d15c8c9\") " pod="calico-system/calico-apiserver-5966b74f89-mgnnz" Mar 7 01:33:16.593931 kubelet[2709]: I0307 01:33:16.593859 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-ca-bundle\") pod \"whisker-6d785d65b8-24w74\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:16.593931 kubelet[2709]: I0307 01:33:16.593880 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff997689-ed72-4f2b-ad6a-78b32cbaabf3-tigera-ca-bundle\") pod 
\"calico-kube-controllers-666c98579-qxmzh\" (UID: \"ff997689-ed72-4f2b-ad6a-78b32cbaabf3\") " pod="calico-system/calico-kube-controllers-666c98579-qxmzh" Mar 7 01:33:16.594103 kubelet[2709]: I0307 01:33:16.593899 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe0fc753-6131-4c82-a147-0fb13afc44d9-config-volume\") pod \"coredns-674b8bbfcf-lm86t\" (UID: \"fe0fc753-6131-4c82-a147-0fb13afc44d9\") " pod="kube-system/coredns-674b8bbfcf-lm86t" Mar 7 01:33:16.594103 kubelet[2709]: I0307 01:33:16.594035 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9j5w\" (UniqueName: \"kubernetes.io/projected/ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba-kube-api-access-s9j5w\") pod \"coredns-674b8bbfcf-vdbpr\" (UID: \"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba\") " pod="kube-system/coredns-674b8bbfcf-vdbpr" Mar 7 01:33:16.594103 kubelet[2709]: I0307 01:33:16.594051 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-nginx-config\") pod \"whisker-6d785d65b8-24w74\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:16.594103 kubelet[2709]: I0307 01:33:16.594065 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsrw\" (UniqueName: \"kubernetes.io/projected/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-kube-api-access-kqsrw\") pod \"whisker-6d785d65b8-24w74\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:16.594289 kubelet[2709]: I0307 01:33:16.594229 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-backend-key-pair\") pod \"whisker-6d785d65b8-24w74\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:16.594289 kubelet[2709]: I0307 01:33:16.594249 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zdw\" (UniqueName: \"kubernetes.io/projected/ff997689-ed72-4f2b-ad6a-78b32cbaabf3-kube-api-access-r9zdw\") pod \"calico-kube-controllers-666c98579-qxmzh\" (UID: \"ff997689-ed72-4f2b-ad6a-78b32cbaabf3\") " pod="calico-system/calico-kube-controllers-666c98579-qxmzh" Mar 7 01:33:16.594289 kubelet[2709]: I0307 01:33:16.594266 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ng6\" (UniqueName: \"kubernetes.io/projected/fe0fc753-6131-4c82-a147-0fb13afc44d9-kube-api-access-v5ng6\") pod \"coredns-674b8bbfcf-lm86t\" (UID: \"fe0fc753-6131-4c82-a147-0fb13afc44d9\") " pod="kube-system/coredns-674b8bbfcf-lm86t" Mar 7 01:33:16.594449 kubelet[2709]: I0307 01:33:16.594384 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ps8q\" (UniqueName: \"kubernetes.io/projected/b74ff1db-535d-4184-94d2-59a40d15c8c9-kube-api-access-8ps8q\") pod \"calico-apiserver-5966b74f89-mgnnz\" (UID: \"b74ff1db-535d-4184-94d2-59a40d15c8c9\") " pod="calico-system/calico-apiserver-5966b74f89-mgnnz" Mar 7 01:33:16.594449 kubelet[2709]: I0307 01:33:16.594410 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba-config-volume\") pod \"coredns-674b8bbfcf-vdbpr\" (UID: \"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba\") " pod="kube-system/coredns-674b8bbfcf-vdbpr" Mar 7 01:33:16.695276 kubelet[2709]: I0307 01:33:16.695160 2709 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hscjh\" (UniqueName: \"kubernetes.io/projected/82ecbaaa-38d2-47ca-8766-21b7ed9556a7-kube-api-access-hscjh\") pod \"goldmane-5b85766d88-dxh9d\" (UID: \"82ecbaaa-38d2-47ca-8766-21b7ed9556a7\") " pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:16.698103 kubelet[2709]: I0307 01:33:16.695994 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ecbaaa-38d2-47ca-8766-21b7ed9556a7-config\") pod \"goldmane-5b85766d88-dxh9d\" (UID: \"82ecbaaa-38d2-47ca-8766-21b7ed9556a7\") " pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:16.698103 kubelet[2709]: I0307 01:33:16.696069 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82ecbaaa-38d2-47ca-8766-21b7ed9556a7-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-dxh9d\" (UID: \"82ecbaaa-38d2-47ca-8766-21b7ed9556a7\") " pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:16.698103 kubelet[2709]: I0307 01:33:16.696124 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/82ecbaaa-38d2-47ca-8766-21b7ed9556a7-goldmane-key-pair\") pod \"goldmane-5b85766d88-dxh9d\" (UID: \"82ecbaaa-38d2-47ca-8766-21b7ed9556a7\") " pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:16.698103 kubelet[2709]: I0307 01:33:16.696181 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8x5l\" (UniqueName: \"kubernetes.io/projected/a05ad463-2b61-48be-ab34-432b9b18b36f-kube-api-access-g8x5l\") pod \"calico-apiserver-5966b74f89-6sjb8\" (UID: \"a05ad463-2b61-48be-ab34-432b9b18b36f\") " pod="calico-system/calico-apiserver-5966b74f89-6sjb8" Mar 7 01:33:16.698103 kubelet[2709]: 
I0307 01:33:16.696212 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a05ad463-2b61-48be-ab34-432b9b18b36f-calico-apiserver-certs\") pod \"calico-apiserver-5966b74f89-6sjb8\" (UID: \"a05ad463-2b61-48be-ab34-432b9b18b36f\") " pod="calico-system/calico-apiserver-5966b74f89-6sjb8" Mar 7 01:33:16.860926 kubelet[2709]: E0307 01:33:16.860888 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:16.861987 containerd[1558]: time="2026-03-07T01:33:16.861578088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lm86t,Uid:fe0fc753-6131-4c82-a147-0fb13afc44d9,Namespace:kube-system,Attempt:0,}" Mar 7 01:33:16.863162 containerd[1558]: time="2026-03-07T01:33:16.862584688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d785d65b8-24w74,Uid:3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:16.871674 containerd[1558]: time="2026-03-07T01:33:16.868950564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c98579-qxmzh,Uid:ff997689-ed72-4f2b-ad6a-78b32cbaabf3,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:16.880515 containerd[1558]: time="2026-03-07T01:33:16.880488538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-mgnnz,Uid:b74ff1db-535d-4184-94d2-59a40d15c8c9,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:16.881469 containerd[1558]: time="2026-03-07T01:33:16.881449378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-dxh9d,Uid:82ecbaaa-38d2-47ca-8766-21b7ed9556a7,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:16.893192 containerd[1558]: time="2026-03-07T01:33:16.893134412Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5966b74f89-6sjb8,Uid:a05ad463-2b61-48be-ab34-432b9b18b36f,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:16.895811 kubelet[2709]: E0307 01:33:16.895429 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:16.896519 containerd[1558]: time="2026-03-07T01:33:16.895980020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdbpr,Uid:ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba,Namespace:kube-system,Attempt:0,}" Mar 7 01:33:16.979419 containerd[1558]: time="2026-03-07T01:33:16.979365488Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:33:17.023844 containerd[1558]: time="2026-03-07T01:33:17.023804047Z" level=info msg="CreateContainer within sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\"" Mar 7 01:33:17.025921 containerd[1558]: time="2026-03-07T01:33:17.025768186Z" level=info msg="StartContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\"" Mar 7 01:33:17.073686 containerd[1558]: time="2026-03-07T01:33:17.073036606Z" level=error msg="Failed to destroy network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.073686 containerd[1558]: time="2026-03-07T01:33:17.073489056Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.073686 containerd[1558]: time="2026-03-07T01:33:17.073527566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-mgnnz,Uid:b74ff1db-535d-4184-94d2-59a40d15c8c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.074982 containerd[1558]: time="2026-03-07T01:33:17.074352275Z" level=error msg="Failed to destroy network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.074982 containerd[1558]: time="2026-03-07T01:33:17.074843445Z" level=error msg="encountered an error cleaning up failed sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.074982 containerd[1558]: time="2026-03-07T01:33:17.074873735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c98579-qxmzh,Uid:ff997689-ed72-4f2b-ad6a-78b32cbaabf3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.075797 kubelet[2709]: E0307 01:33:17.075217 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.075797 kubelet[2709]: E0307 01:33:17.075278 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666c98579-qxmzh" Mar 7 01:33:17.075797 kubelet[2709]: E0307 01:33:17.075301 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666c98579-qxmzh" Mar 7 01:33:17.075922 kubelet[2709]: E0307 01:33:17.075344 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-666c98579-qxmzh_calico-system(ff997689-ed72-4f2b-ad6a-78b32cbaabf3)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-666c98579-qxmzh_calico-system(ff997689-ed72-4f2b-ad6a-78b32cbaabf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-666c98579-qxmzh" podUID="ff997689-ed72-4f2b-ad6a-78b32cbaabf3" Mar 7 01:33:17.077088 kubelet[2709]: E0307 01:33:17.076940 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.077088 kubelet[2709]: E0307 01:33:17.076969 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5966b74f89-mgnnz" Mar 7 01:33:17.077088 kubelet[2709]: E0307 01:33:17.077006 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5966b74f89-mgnnz" Mar 7 
01:33:17.077183 kubelet[2709]: E0307 01:33:17.077038 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5966b74f89-mgnnz_calico-system(b74ff1db-535d-4184-94d2-59a40d15c8c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5966b74f89-mgnnz_calico-system(b74ff1db-535d-4184-94d2-59a40d15c8c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5966b74f89-mgnnz" podUID="b74ff1db-535d-4184-94d2-59a40d15c8c9" Mar 7 01:33:17.200285 containerd[1558]: time="2026-03-07T01:33:17.199999230Z" level=error msg="Failed to destroy network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.203672 containerd[1558]: time="2026-03-07T01:33:17.203641078Z" level=error msg="encountered an error cleaning up failed sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.203821 containerd[1558]: time="2026-03-07T01:33:17.203798318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d785d65b8-24w74,Uid:3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.204094 kubelet[2709]: E0307 01:33:17.204064 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.204243 kubelet[2709]: E0307 01:33:17.204215 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:17.204352 kubelet[2709]: E0307 01:33:17.204338 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d785d65b8-24w74" Mar 7 01:33:17.206004 kubelet[2709]: E0307 01:33:17.204981 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d785d65b8-24w74_calico-system(3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6d785d65b8-24w74_calico-system(3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d785d65b8-24w74" podUID="3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" Mar 7 01:33:17.212036 containerd[1558]: time="2026-03-07T01:33:17.212007954Z" level=error msg="Failed to destroy network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.212479 containerd[1558]: time="2026-03-07T01:33:17.212455964Z" level=error msg="encountered an error cleaning up failed sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.212579 containerd[1558]: time="2026-03-07T01:33:17.212558524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lm86t,Uid:fe0fc753-6131-4c82-a147-0fb13afc44d9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.212936 kubelet[2709]: E0307 01:33:17.212744 2709 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.212936 kubelet[2709]: E0307 01:33:17.212784 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lm86t" Mar 7 01:33:17.212936 kubelet[2709]: E0307 01:33:17.212803 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lm86t" Mar 7 01:33:17.213033 kubelet[2709]: E0307 01:33:17.212840 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lm86t_kube-system(fe0fc753-6131-4c82-a147-0fb13afc44d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lm86t_kube-system(fe0fc753-6131-4c82-a147-0fb13afc44d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lm86t" podUID="fe0fc753-6131-4c82-a147-0fb13afc44d9" Mar 7 01:33:17.214839 containerd[1558]: time="2026-03-07T01:33:17.214094753Z" level=info msg="StartContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" returns successfully" Mar 7 01:33:17.229193 containerd[1558]: time="2026-03-07T01:33:17.229155297Z" level=error msg="Failed to destroy network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.229933 containerd[1558]: time="2026-03-07T01:33:17.229824407Z" level=error msg="encountered an error cleaning up failed sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.229933 containerd[1558]: time="2026-03-07T01:33:17.229869947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdbpr,Uid:ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.230532 kubelet[2709]: E0307 01:33:17.230239 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.230532 kubelet[2709]: E0307 01:33:17.230315 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdbpr" Mar 7 01:33:17.230532 kubelet[2709]: E0307 01:33:17.230338 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vdbpr" Mar 7 01:33:17.230649 kubelet[2709]: E0307 01:33:17.230414 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vdbpr_kube-system(ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vdbpr_kube-system(ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vdbpr" podUID="ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba" Mar 7 01:33:17.242077 containerd[1558]: time="2026-03-07T01:33:17.241872061Z" 
level=error msg="Failed to destroy network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.242473 containerd[1558]: time="2026-03-07T01:33:17.242390001Z" level=error msg="encountered an error cleaning up failed sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.242473 containerd[1558]: time="2026-03-07T01:33:17.242435971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-dxh9d,Uid:82ecbaaa-38d2-47ca-8766-21b7ed9556a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.245513 kubelet[2709]: E0307 01:33:17.243964 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.245513 kubelet[2709]: E0307 01:33:17.244029 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:17.245513 kubelet[2709]: E0307 01:33:17.244049 2709 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-dxh9d" Mar 7 01:33:17.245657 kubelet[2709]: E0307 01:33:17.244115 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-dxh9d_calico-system(82ecbaaa-38d2-47ca-8766-21b7ed9556a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-dxh9d_calico-system(82ecbaaa-38d2-47ca-8766-21b7ed9556a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-dxh9d" podUID="82ecbaaa-38d2-47ca-8766-21b7ed9556a7" Mar 7 01:33:17.249370 containerd[1558]: time="2026-03-07T01:33:17.249336048Z" level=error msg="Failed to destroy network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 
01:33:17.249924 containerd[1558]: time="2026-03-07T01:33:17.249877968Z" level=error msg="encountered an error cleaning up failed sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.250608 containerd[1558]: time="2026-03-07T01:33:17.250576007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-6sjb8,Uid:a05ad463-2b61-48be-ab34-432b9b18b36f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.252157 kubelet[2709]: E0307 01:33:17.250719 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:33:17.252157 kubelet[2709]: E0307 01:33:17.250753 2709 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5966b74f89-6sjb8" Mar 7 01:33:17.252157 kubelet[2709]: E0307 01:33:17.250772 2709 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5966b74f89-6sjb8" Mar 7 01:33:17.252283 kubelet[2709]: E0307 01:33:17.250810 2709 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5966b74f89-6sjb8_calico-system(a05ad463-2b61-48be-ab34-432b9b18b36f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5966b74f89-6sjb8_calico-system(a05ad463-2b61-48be-ab34-432b9b18b36f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5966b74f89-6sjb8" podUID="a05ad463-2b61-48be-ab34-432b9b18b36f" Mar 7 01:33:17.809430 containerd[1558]: time="2026-03-07T01:33:17.809388432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tt5kk,Uid:ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:17.839169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80-shm.mount: Deactivated successfully. Mar 7 01:33:17.839405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199-shm.mount: Deactivated successfully. 
Mar 7 01:33:17.839583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932-shm.mount: Deactivated successfully. Mar 7 01:33:17.950410 kubelet[2709]: I0307 01:33:17.950372 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:17.951174 containerd[1558]: time="2026-03-07T01:33:17.951149128Z" level=info msg="StopPodSandbox for \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\"" Mar 7 01:33:17.953648 containerd[1558]: time="2026-03-07T01:33:17.952959338Z" level=info msg="Ensure that sandbox c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff in task-service has been cleanup successfully" Mar 7 01:33:17.953726 kubelet[2709]: I0307 01:33:17.953542 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:17.955677 containerd[1558]: time="2026-03-07T01:33:17.955652607Z" level=info msg="StopPodSandbox for \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\"" Mar 7 01:33:17.958466 containerd[1558]: time="2026-03-07T01:33:17.958443946Z" level=info msg="Ensure that sandbox 8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b in task-service has been cleanup successfully" Mar 7 01:33:17.961663 kubelet[2709]: I0307 01:33:17.961382 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:17.963360 containerd[1558]: time="2026-03-07T01:33:17.963335924Z" level=info msg="StopPodSandbox for \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\"" Mar 7 01:33:17.963981 containerd[1558]: time="2026-03-07T01:33:17.963877603Z" level=info msg="Ensure that sandbox 
80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80 in task-service has been cleanup successfully" Mar 7 01:33:17.968364 kubelet[2709]: I0307 01:33:17.968210 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:17.969409 containerd[1558]: time="2026-03-07T01:33:17.968857751Z" level=info msg="StopPodSandbox for \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\"" Mar 7 01:33:17.971510 containerd[1558]: time="2026-03-07T01:33:17.971484670Z" level=info msg="Ensure that sandbox 0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199 in task-service has been cleanup successfully" Mar 7 01:33:17.972508 kubelet[2709]: I0307 01:33:17.972487 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:17.973758 containerd[1558]: time="2026-03-07T01:33:17.973740759Z" level=info msg="StopPodSandbox for \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\"" Mar 7 01:33:17.975247 containerd[1558]: time="2026-03-07T01:33:17.975225208Z" level=info msg="Ensure that sandbox b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932 in task-service has been cleanup successfully" Mar 7 01:33:18.011881 systemd-networkd[1233]: cali6827f7f2762: Link UP Mar 7 01:33:18.013573 systemd-networkd[1233]: cali6827f7f2762: Gained carrier Mar 7 01:33:18.025654 kubelet[2709]: I0307 01:33:18.018320 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:18.028624 containerd[1558]: time="2026-03-07T01:33:18.028596926Z" level=info msg="StopPodSandbox for \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\"" Mar 7 01:33:18.034217 kubelet[2709]: I0307 01:33:18.033287 2709 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:18.035182 containerd[1558]: time="2026-03-07T01:33:18.034840064Z" level=info msg="Ensure that sandbox 525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7 in task-service has been cleanup successfully" Mar 7 01:33:18.035926 containerd[1558]: time="2026-03-07T01:33:18.035834144Z" level=info msg="StopPodSandbox for \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\"" Mar 7 01:33:18.037280 kubelet[2709]: I0307 01:33:18.037178 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8kpg9" podStartSLOduration=3.372921102 podStartE2EDuration="13.037166753s" podCreationTimestamp="2026-03-07 01:33:05 +0000 UTC" firstStartedPulling="2026-03-07 01:33:06.154147723 +0000 UTC m=+15.451460716" lastFinishedPulling="2026-03-07 01:33:15.818393364 +0000 UTC m=+25.115706367" observedRunningTime="2026-03-07 01:33:18.036748983 +0000 UTC m=+27.334061976" watchObservedRunningTime="2026-03-07 01:33:18.037166753 +0000 UTC m=+27.334479746" Mar 7 01:33:18.039076 containerd[1558]: time="2026-03-07T01:33:18.038995192Z" level=info msg="Ensure that sandbox a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a in task-service has been cleanup successfully" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.857 [ERROR][3821] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.880 [INFO][3821] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-csi--node--driver--tt5kk-eth0 csi-node-driver- calico-system ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c 602 0 2026-03-07 01:33:05 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-171-132 csi-node-driver-tt5kk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6827f7f2762 [] [] }} ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.880 [INFO][3821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.919 [INFO][3832] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" HandleID="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Workload="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.927 [INFO][3832] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" HandleID="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Workload="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277930), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"csi-node-driver-tt5kk", "timestamp":"2026-03-07 01:33:17.919687283 +0000 UTC"}, Hostname:"172-238-171-132", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000253080)} Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.927 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.927 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.927 [INFO][3832] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.931 [INFO][3832] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.936 [INFO][3832] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.941 [INFO][3832] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.942 [INFO][3832] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.945 [INFO][3832] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.945 [INFO][3832] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.947 [INFO][3832] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449 Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.956 [INFO][3832] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.969 [INFO][3832] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.65/26] block=192.168.121.64/26 handle="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.971 [INFO][3832] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.65/26] handle="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" host="172-238-171-132" Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.972 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:33:18.076348 containerd[1558]: 2026-03-07 01:33:17.973 [INFO][3832] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.65/26] IPv6=[] ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" HandleID="k8s-pod-network.244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Workload="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:17.995 [INFO][3821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-csi--node--driver--tt5kk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"csi-node-driver-tt5kk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6827f7f2762", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:17.995 [INFO][3821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.65/32] ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:17.995 [INFO][3821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6827f7f2762 ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:18.016 [INFO][3821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:18.041 [INFO][3821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-csi--node--driver--tt5kk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c", ResourceVersion:"602", 
Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449", Pod:"csi-node-driver-tt5kk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6827f7f2762", MAC:"fe:77:7b:d1:fc:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:18.076822 containerd[1558]: 2026-03-07 01:33:18.058 [INFO][3821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449" Namespace="calico-system" Pod="csi-node-driver-tt5kk" WorkloadEndpoint="172--238--171--132-k8s-csi--node--driver--tt5kk-eth0" Mar 7 01:33:18.153396 containerd[1558]: time="2026-03-07T01:33:18.151973239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:18.153396 containerd[1558]: time="2026-03-07T01:33:18.152026979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:18.153396 containerd[1558]: time="2026-03-07T01:33:18.152038489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:18.158457 systemd[1]: run-containerd-runc-k8s.io-6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9-runc.XBAAZF.mount: Deactivated successfully. Mar 7 01:33:18.167077 containerd[1558]: time="2026-03-07T01:33:18.162954445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.130 [INFO][3880] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.131 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" iface="eth0" netns="/var/run/netns/cni-a1310df3-898a-7b50-1c03-826602bf327b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.131 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" iface="eth0" netns="/var/run/netns/cni-a1310df3-898a-7b50-1c03-826602bf327b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.131 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" iface="eth0" netns="/var/run/netns/cni-a1310df3-898a-7b50-1c03-826602bf327b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.131 [INFO][3880] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.131 [INFO][3880] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.213 [INFO][3967] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.213 [INFO][3967] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.213 [INFO][3967] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.252 [WARNING][3967] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.253 [INFO][3967] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.276 [INFO][3967] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:18.331700 containerd[1558]: 2026-03-07 01:33:18.308 [INFO][3880] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:18.333061 containerd[1558]: time="2026-03-07T01:33:18.331798991Z" level=info msg="TearDown network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" successfully" Mar 7 01:33:18.333061 containerd[1558]: time="2026-03-07T01:33:18.331823251Z" level=info msg="StopPodSandbox for \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" returns successfully" Mar 7 01:33:18.336124 containerd[1558]: time="2026-03-07T01:33:18.336090989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-dxh9d,Uid:82ecbaaa-38d2-47ca-8766-21b7ed9556a7,Namespace:calico-system,Attempt:1,}" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.238 [INFO][3878] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.239 [INFO][3878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" iface="eth0" netns="/var/run/netns/cni-54cb7e6a-325a-7583-2dbe-a24240b152ef" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.239 [INFO][3878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" iface="eth0" netns="/var/run/netns/cni-54cb7e6a-325a-7583-2dbe-a24240b152ef" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.239 [INFO][3878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" iface="eth0" netns="/var/run/netns/cni-54cb7e6a-325a-7583-2dbe-a24240b152ef" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.239 [INFO][3878] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.239 [INFO][3878] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.318 [INFO][3995] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.318 [INFO][3995] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.318 [INFO][3995] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.325 [WARNING][3995] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.325 [INFO][3995] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.329 [INFO][3995] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:18.353626 containerd[1558]: 2026-03-07 01:33:18.348 [INFO][3878] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:18.353626 containerd[1558]: time="2026-03-07T01:33:18.353107252Z" level=info msg="TearDown network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" successfully" Mar 7 01:33:18.353626 containerd[1558]: time="2026-03-07T01:33:18.353143002Z" level=info msg="StopPodSandbox for \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" returns successfully" Mar 7 01:33:18.354627 containerd[1558]: time="2026-03-07T01:33:18.353838023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c98579-qxmzh,Uid:ff997689-ed72-4f2b-ad6a-78b32cbaabf3,Namespace:calico-system,Attempt:1,}" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.308 [INFO][3873] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.311 [INFO][3873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" iface="eth0" netns="/var/run/netns/cni-684c2ae6-d140-9dfb-aa87-07f31395fb8c" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.313 [INFO][3873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" iface="eth0" netns="/var/run/netns/cni-684c2ae6-d140-9dfb-aa87-07f31395fb8c" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.313 [INFO][3873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" iface="eth0" netns="/var/run/netns/cni-684c2ae6-d140-9dfb-aa87-07f31395fb8c" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3873] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3873] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.396 [INFO][4019] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.397 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.397 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.414 [WARNING][4019] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.414 [INFO][4019] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.418 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:18.473656 containerd[1558]: 2026-03-07 01:33:18.446 [INFO][3873] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:18.473656 containerd[1558]: time="2026-03-07T01:33:18.473495776Z" level=info msg="TearDown network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" successfully" Mar 7 01:33:18.473656 containerd[1558]: time="2026-03-07T01:33:18.473527596Z" level=info msg="StopPodSandbox for \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" returns successfully" Mar 7 01:33:18.474297 kubelet[2709]: E0307 01:33:18.474166 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:18.477465 containerd[1558]: time="2026-03-07T01:33:18.475807966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdbpr,Uid:ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba,Namespace:kube-system,Attempt:1,}" Mar 7 01:33:18.570243 containerd[1558]: time="2026-03-07T01:33:18.570188230Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tt5kk,Uid:ffd24aa2-3847-4c2e-a195-1abdfb2f1e4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449\"" Mar 7 01:33:18.580003 containerd[1558]: time="2026-03-07T01:33:18.579976206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3901] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" iface="eth0" netns="/var/run/netns/cni-6fa5fcbf-e946-8350-d42c-1a374143b659" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" iface="eth0" netns="/var/run/netns/cni-6fa5fcbf-e946-8350-d42c-1a374143b659" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" iface="eth0" netns="/var/run/netns/cni-6fa5fcbf-e946-8350-d42c-1a374143b659" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.314 [INFO][3901] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.315 [INFO][3901] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.533 [INFO][4017] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.533 [INFO][4017] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.533 [INFO][4017] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.569 [WARNING][4017] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.569 [INFO][4017] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.579 [INFO][4017] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:18.627019 containerd[1558]: 2026-03-07 01:33:18.600 [INFO][3901] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:18.627019 containerd[1558]: time="2026-03-07T01:33:18.624370249Z" level=info msg="TearDown network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" successfully" Mar 7 01:33:18.627019 containerd[1558]: time="2026-03-07T01:33:18.624407489Z" level=info msg="StopPodSandbox for \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" returns successfully" Mar 7 01:33:18.718114 kubelet[2709]: I0307 01:33:18.717223 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-backend-key-pair\") pod \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " Mar 7 01:33:18.718114 kubelet[2709]: I0307 01:33:18.717268 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqsrw\" (UniqueName: 
\"kubernetes.io/projected/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-kube-api-access-kqsrw\") pod \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " Mar 7 01:33:18.718114 kubelet[2709]: I0307 01:33:18.717295 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-nginx-config\") pod \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " Mar 7 01:33:18.718114 kubelet[2709]: I0307 01:33:18.717313 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-ca-bundle\") pod \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\" (UID: \"3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca\") " Mar 7 01:33:18.718114 kubelet[2709]: I0307 01:33:18.717777 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" (UID: "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:33:18.723113 kubelet[2709]: I0307 01:33:18.721970 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" (UID: "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:33:18.741795 kubelet[2709]: I0307 01:33:18.741585 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-kube-api-access-kqsrw" (OuterVolumeSpecName: "kube-api-access-kqsrw") pod "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" (UID: "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca"). InnerVolumeSpecName "kube-api-access-kqsrw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:33:18.742321 kubelet[2709]: I0307 01:33:18.742288 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" (UID: "3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.435 [INFO][3936] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.435 [INFO][3936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" iface="eth0" netns="/var/run/netns/cni-3519e2b9-dad1-536d-bf91-04076e4c0c89" Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.437 [INFO][3936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" iface="eth0" netns="/var/run/netns/cni-3519e2b9-dad1-536d-bf91-04076e4c0c89" Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.437 [INFO][3936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" iface="eth0" netns="/var/run/netns/cni-3519e2b9-dad1-536d-bf91-04076e4c0c89"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.437 [INFO][3936] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.437 [INFO][3936] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.709 [INFO][4064] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.709 [INFO][4064] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.710 [INFO][4064] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.749 [WARNING][4064] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.749 [INFO][4064] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0"
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.750 [INFO][4064] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:33:18.764637 containerd[1558]: 2026-03-07 01:33:18.755 [INFO][3936] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a"
Mar 7 01:33:18.769656 containerd[1558]: time="2026-03-07T01:33:18.767687954Z" level=info msg="TearDown network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" successfully"
Mar 7 01:33:18.769656 containerd[1558]: time="2026-03-07T01:33:18.767728384Z" level=info msg="StopPodSandbox for \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" returns successfully"
Mar 7 01:33:18.769656 containerd[1558]: time="2026-03-07T01:33:18.769167424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-mgnnz,Uid:b74ff1db-535d-4184-94d2-59a40d15c8c9,Namespace:calico-system,Attempt:1,}"
Mar 7 01:33:18.819154 kubelet[2709]: I0307 01:33:18.819113 2709 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-nginx-config\") on node \"172-238-171-132\" DevicePath \"\""
Mar 7 01:33:18.822770 kubelet[2709]: I0307 01:33:18.822748 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-ca-bundle\") on node \"172-238-171-132\" DevicePath \"\""
Mar 7 01:33:18.832071 kubelet[2709]: I0307 01:33:18.832047 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-whisker-backend-key-pair\") on node \"172-238-171-132\" DevicePath \"\""
Mar 7 01:33:18.832288 kubelet[2709]: I0307 01:33:18.832253 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqsrw\" (UniqueName: \"kubernetes.io/projected/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca-kube-api-access-kqsrw\") on node \"172-238-171-132\" DevicePath \"\""
Mar 7 01:33:18.857592 systemd[1]: run-netns-cni\x2d684c2ae6\x2dd140\x2d9dfb\x2daa87\x2d07f31395fb8c.mount: Deactivated successfully.
Mar 7 01:33:18.857839 systemd[1]: run-netns-cni\x2da1310df3\x2d898a\x2d7b50\x2d1c03\x2d826602bf327b.mount: Deactivated successfully.
Mar 7 01:33:18.861297 systemd[1]: run-netns-cni\x2d3519e2b9\x2ddad1\x2d536d\x2dbf91\x2d04076e4c0c89.mount: Deactivated successfully.
Mar 7 01:33:18.861505 systemd[1]: run-netns-cni\x2d54cb7e6a\x2d325a\x2d7583\x2d2dbe\x2da24240b152ef.mount: Deactivated successfully.
Mar 7 01:33:18.861691 systemd[1]: run-netns-cni\x2d6fa5fcbf\x2de946\x2d8350\x2dd42c\x2d1a374143b659.mount: Deactivated successfully.
Mar 7 01:33:18.861885 systemd[1]: var-lib-kubelet-pods-3faa65e7\x2d49f7\x2d49b8\x2d9f60\x2df29ba1f1f4ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqsrw.mount: Deactivated successfully.
Mar 7 01:33:18.862113 systemd[1]: var-lib-kubelet-pods-3faa65e7\x2d49f7\x2d49b8\x2d9f60\x2df29ba1f1f4ca-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.376 [INFO][3905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.376 [INFO][3905] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" iface="eth0" netns="/var/run/netns/cni-0b670d7d-fb54-67c1-06fc-4e2cc869ecce"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.376 [INFO][3905] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" iface="eth0" netns="/var/run/netns/cni-0b670d7d-fb54-67c1-06fc-4e2cc869ecce"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.404 [INFO][3905] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" iface="eth0" netns="/var/run/netns/cni-0b670d7d-fb54-67c1-06fc-4e2cc869ecce"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.404 [INFO][3905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.404 [INFO][3905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.759 [INFO][4040] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.761 [INFO][4040] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.762 [INFO][4040] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.780 [WARNING][4040] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.781 [INFO][4040] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0"
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.788 [INFO][4040] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:33:18.876926 containerd[1558]: 2026-03-07 01:33:18.816 [INFO][3905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932"
Mar 7 01:33:18.879191 containerd[1558]: time="2026-03-07T01:33:18.879095522Z" level=info msg="TearDown network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" successfully"
Mar 7 01:33:18.879293 containerd[1558]: time="2026-03-07T01:33:18.879277352Z" level=info msg="StopPodSandbox for \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" returns successfully"
Mar 7 01:33:18.880823 kubelet[2709]: E0307 01:33:18.880788 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:33:18.886885 systemd[1]: run-netns-cni\x2d0b670d7d\x2dfb54\x2d67c1\x2d06fc\x2d4e2cc869ecce.mount: Deactivated successfully.
Mar 7 01:33:18.894048 containerd[1558]: time="2026-03-07T01:33:18.891394617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lm86t,Uid:fe0fc753-6131-4c82-a147-0fb13afc44d9,Namespace:kube-system,Attempt:1,}"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.394 [INFO][3941] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.394 [INFO][3941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" iface="eth0" netns="/var/run/netns/cni-cda76174-4a50-248d-b605-4df7da2d0cb9"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.394 [INFO][3941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" iface="eth0" netns="/var/run/netns/cni-cda76174-4a50-248d-b605-4df7da2d0cb9"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.405 [INFO][3941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" iface="eth0" netns="/var/run/netns/cni-cda76174-4a50-248d-b605-4df7da2d0cb9"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.406 [INFO][3941] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.406 [INFO][3941] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.894 [INFO][4045] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.896 [INFO][4045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.896 [INFO][4045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.926 [WARNING][4045] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.926 [INFO][4045] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0"
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.930 [INFO][4045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:33:19.015225 containerd[1558]: 2026-03-07 01:33:18.964 [INFO][3941] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7"
Mar 7 01:33:19.020275 containerd[1558]: time="2026-03-07T01:33:19.016521671Z" level=info msg="TearDown network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" successfully"
Mar 7 01:33:19.020275 containerd[1558]: time="2026-03-07T01:33:19.016558381Z" level=info msg="StopPodSandbox for \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" returns successfully"
Mar 7 01:33:19.027344 systemd[1]: run-netns-cni\x2dcda76174\x2d4a50\x2d248d\x2db605\x2d4df7da2d0cb9.mount: Deactivated successfully.
Mar 7 01:33:19.036407 containerd[1558]: time="2026-03-07T01:33:19.036277624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-6sjb8,Uid:a05ad463-2b61-48be-ab34-432b9b18b36f,Namespace:calico-system,Attempt:1,}"
Mar 7 01:33:19.246635 kubelet[2709]: I0307 01:33:19.243921 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d59dd54b-2194-49c2-a181-648b8af867fb-nginx-config\") pod \"whisker-744b5f585-4xgb5\" (UID: \"d59dd54b-2194-49c2-a181-648b8af867fb\") " pod="calico-system/whisker-744b5f585-4xgb5"
Mar 7 01:33:19.246635 kubelet[2709]: I0307 01:33:19.243993 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d59dd54b-2194-49c2-a181-648b8af867fb-whisker-backend-key-pair\") pod \"whisker-744b5f585-4xgb5\" (UID: \"d59dd54b-2194-49c2-a181-648b8af867fb\") " pod="calico-system/whisker-744b5f585-4xgb5"
Mar 7 01:33:19.246635 kubelet[2709]: I0307 01:33:19.244208 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d59dd54b-2194-49c2-a181-648b8af867fb-whisker-ca-bundle\") pod \"whisker-744b5f585-4xgb5\" (UID: \"d59dd54b-2194-49c2-a181-648b8af867fb\") " pod="calico-system/whisker-744b5f585-4xgb5"
Mar 7 01:33:19.246635 kubelet[2709]: I0307 01:33:19.244243 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvgp\" (UniqueName: \"kubernetes.io/projected/d59dd54b-2194-49c2-a181-648b8af867fb-kube-api-access-chvgp\") pod \"whisker-744b5f585-4xgb5\" (UID: \"d59dd54b-2194-49c2-a181-648b8af867fb\") " pod="calico-system/whisker-744b5f585-4xgb5"
Mar 7 01:33:19.367627 systemd-networkd[1233]: calic1411b8d4c3: Link UP
Mar 7 01:33:19.374047 systemd-networkd[1233]: calic1411b8d4c3: Gained carrier
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:18.610 [ERROR][4043] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:18.747 [INFO][4043] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0 goldmane-5b85766d88- calico-system 82ecbaaa-38d2-47ca-8766-21b7ed9556a7 876 0 2026-03-07 01:33:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-238-171-132 goldmane-5b85766d88-dxh9d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic1411b8d4c3 [] [] }} ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:18.747 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.067 [INFO][4192] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" HandleID="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.104 [INFO][4192] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" HandleID="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000464360), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"goldmane-5b85766d88-dxh9d", "timestamp":"2026-03-07 01:33:19.067036784 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000436420)}
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.105 [INFO][4192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.110 [INFO][4192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.111 [INFO][4192] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132'
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.125 [INFO][4192] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.171 [INFO][4192] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.207 [INFO][4192] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.226 [INFO][4192] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.248 [INFO][4192] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.248 [INFO][4192] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.254 [INFO][4192] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.283 [INFO][4192] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.307 [INFO][4192] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.66/26] block=192.168.121.64/26 handle="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.308 [INFO][4192] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.66/26] handle="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" host="172-238-171-132"
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.313 [INFO][4192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:33:19.420562 containerd[1558]: 2026-03-07 01:33:19.318 [INFO][4192] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.66/26] IPv6=[] ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" HandleID="k8s-pod-network.bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.359 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"82ecbaaa-38d2-47ca-8766-21b7ed9556a7", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"goldmane-5b85766d88-dxh9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1411b8d4c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.359 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.66/32] ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.359 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1411b8d4c3 ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.368 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.372 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"82ecbaaa-38d2-47ca-8766-21b7ed9556a7", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2", Pod:"goldmane-5b85766d88-dxh9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1411b8d4c3", MAC:"c6:a5:da:c1:12:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:33:19.421868 containerd[1558]: 2026-03-07 01:33:19.390 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2" Namespace="calico-system" Pod="goldmane-5b85766d88-dxh9d" WorkloadEndpoint="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0"
Mar 7 01:33:19.438918 systemd-networkd[1233]: calic1c2353176f: Link UP
Mar 7 01:33:19.441938 systemd-networkd[1233]: calic1c2353176f: Gained carrier
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:18.739 [ERROR][4052] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:18.790 [INFO][4052] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0 calico-kube-controllers-666c98579- calico-system ff997689-ed72-4f2b-ad6a-78b32cbaabf3 878 0 2026-03-07 01:33:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:666c98579 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-238-171-132 calico-kube-controllers-666c98579-qxmzh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1c2353176f [] [] }} ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:18.793 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.063 [INFO][4205] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" HandleID="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.111 [INFO][4205] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" HandleID="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f9e50), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"calico-kube-controllers-666c98579-qxmzh", "timestamp":"2026-03-07 01:33:19.063045175 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186000)}
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.111 [INFO][4205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.308 [INFO][4205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.309 [INFO][4205] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132'
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.321 [INFO][4205] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.341 [INFO][4205] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.382 [INFO][4205] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.389 [INFO][4205] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.392 [INFO][4205] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.392 [INFO][4205] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.394 [INFO][4205] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.404 [INFO][4205] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.415 [INFO][4205] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.67/26] block=192.168.121.64/26 handle="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.415 [INFO][4205] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.67/26] handle="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" host="172-238-171-132"
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.415 [INFO][4205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:33:19.504716 containerd[1558]: 2026-03-07 01:33:19.415 [INFO][4205] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.67/26] IPv6=[] ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" HandleID="k8s-pod-network.6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.425 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0", GenerateName:"calico-kube-controllers-666c98579-", Namespace:"calico-system", SelfLink:"", UID:"ff997689-ed72-4f2b-ad6a-78b32cbaabf3", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666c98579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"calico-kube-controllers-666c98579-qxmzh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1c2353176f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.425 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.67/32] ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.425 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1c2353176f ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.443 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.467 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0", GenerateName:"calico-kube-controllers-666c98579-", Namespace:"calico-system", SelfLink:"", UID:"ff997689-ed72-4f2b-ad6a-78b32cbaabf3", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666c98579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81", Pod:"calico-kube-controllers-666c98579-qxmzh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1c2353176f", MAC:"f2:4d:66:3d:4f:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:33:19.506300 containerd[1558]: 2026-03-07 01:33:19.490 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81" Namespace="calico-system" Pod="calico-kube-controllers-666c98579-qxmzh" WorkloadEndpoint="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0"
Mar 7 01:33:19.535990 containerd[1558]: time="2026-03-07T01:33:19.535219031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-744b5f585-4xgb5,Uid:d59dd54b-2194-49c2-a181-648b8af867fb,Namespace:calico-system,Attempt:0,}"
Mar 7 01:33:19.648442 systemd-networkd[1233]: cali4b02ce7dbae: Link UP
Mar 7 01:33:19.650166 systemd-networkd[1233]: cali4b02ce7dbae: Gained carrier
Mar 7 01:33:19.677282 systemd-networkd[1233]: cali6827f7f2762: Gained IPv6LL
Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.026 [ERROR][4132] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.146 [INFO][4132] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0 coredns-674b8bbfcf- kube-system ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba 879 0 2026-03-07 01:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-171-132 coredns-674b8bbfcf-vdbpr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b02ce7dbae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }}
ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.147 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.457 [INFO][4261] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" HandleID="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.473 [INFO][4261] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" HandleID="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103e50), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-171-132", "pod":"coredns-674b8bbfcf-vdbpr", "timestamp":"2026-03-07 01:33:19.457799426 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00028da20)} Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.473 [INFO][4261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.473 [INFO][4261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.473 [INFO][4261] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.502 [INFO][4261] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.510 [INFO][4261] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.522 [INFO][4261] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.534 [INFO][4261] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.550 [INFO][4261] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.550 [INFO][4261] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.553 [INFO][4261] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343 Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.593 [INFO][4261] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 
01:33:19.605 [INFO][4261] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.68/26] block=192.168.121.64/26 handle="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.605 [INFO][4261] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.68/26] handle="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" host="172-238-171-132" Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.605 [INFO][4261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:19.717973 containerd[1558]: 2026-03-07 01:33:19.605 [INFO][4261] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.68/26] IPv6=[] ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" HandleID="k8s-pod-network.d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.641 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"coredns-674b8bbfcf-vdbpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b02ce7dbae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.641 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.68/32] ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.641 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b02ce7dbae ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.653 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.656 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343", Pod:"coredns-674b8bbfcf-vdbpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b02ce7dbae", MAC:"82:bb:58:71:b3:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.719054 containerd[1558]: 2026-03-07 01:33:19.705 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343" Namespace="kube-system" Pod="coredns-674b8bbfcf-vdbpr" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:19.724824 containerd[1558]: time="2026-03-07T01:33:19.723946009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:19.724824 containerd[1558]: time="2026-03-07T01:33:19.724038349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:19.724824 containerd[1558]: time="2026-03-07T01:33:19.724056069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.726767 containerd[1558]: time="2026-03-07T01:33:19.725667009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.736719 containerd[1558]: time="2026-03-07T01:33:19.736582316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:19.737050 containerd[1558]: time="2026-03-07T01:33:19.736899415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:19.737259 containerd[1558]: time="2026-03-07T01:33:19.737223115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.739847 containerd[1558]: time="2026-03-07T01:33:19.739420354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.773411 systemd-networkd[1233]: cali0eda48199d3: Link UP Mar 7 01:33:19.777299 systemd-networkd[1233]: cali0eda48199d3: Gained carrier Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.331 [ERROR][4241] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.394 [INFO][4241] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0 calico-apiserver-5966b74f89- calico-system a05ad463-2b61-48be-ab34-432b9b18b36f 882 0 2026-03-07 01:33:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5966b74f89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-171-132 calico-apiserver-5966b74f89-6sjb8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0eda48199d3 [] [] }} ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.394 [INFO][4241] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.630 [INFO][4281] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" HandleID="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.695 [INFO][4281] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" HandleID="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000381980), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"calico-apiserver-5966b74f89-6sjb8", "timestamp":"2026-03-07 01:33:19.63053776 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000636420)} Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.695 [INFO][4281] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.695 [INFO][4281] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.695 [INFO][4281] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.699 [INFO][4281] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.705 [INFO][4281] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.718 [INFO][4281] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.723 [INFO][4281] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.729 [INFO][4281] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.729 [INFO][4281] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.734 [INFO][4281] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84 Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.740 [INFO][4281] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.747 [INFO][4281] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.69/26] block=192.168.121.64/26 
handle="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.747 [INFO][4281] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.69/26] handle="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" host="172-238-171-132" Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.747 [INFO][4281] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:19.804546 containerd[1558]: 2026-03-07 01:33:19.747 [INFO][4281] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.69/26] IPv6=[] ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" HandleID="k8s-pod-network.0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.763 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"a05ad463-2b61-48be-ab34-432b9b18b36f", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"calico-apiserver-5966b74f89-6sjb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0eda48199d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.763 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.69/32] ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.763 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eda48199d3 ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.777 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.777 [INFO][4241] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"a05ad463-2b61-48be-ab34-432b9b18b36f", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84", Pod:"calico-apiserver-5966b74f89-6sjb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0eda48199d3", MAC:"36:f7:5c:dd:0b:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.805814 containerd[1558]: 2026-03-07 01:33:19.793 [INFO][4241] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-6sjb8" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:19.882768 containerd[1558]: time="2026-03-07T01:33:19.881448998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:19.882768 containerd[1558]: time="2026-03-07T01:33:19.882123917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:19.882768 containerd[1558]: time="2026-03-07T01:33:19.882144137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.884166 containerd[1558]: time="2026-03-07T01:33:19.884050647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:19.927267 systemd-networkd[1233]: cali35952829b2f: Link UP Mar 7 01:33:19.928886 systemd-networkd[1233]: cali35952829b2f: Gained carrier Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.355 [ERROR][4219] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.421 [INFO][4219] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0 coredns-674b8bbfcf- kube-system fe0fc753-6131-4c82-a147-0fb13afc44d9 881 0 2026-03-07 01:32:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-171-132 coredns-674b8bbfcf-lm86t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35952829b2f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.421 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.670 [INFO][4294] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" 
HandleID="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.715 [INFO][4294] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" HandleID="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c5af0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-171-132", "pod":"coredns-674b8bbfcf-lm86t", "timestamp":"2026-03-07 01:33:19.670397426 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006266e0)} Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.715 [INFO][4294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.748 [INFO][4294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.748 [INFO][4294] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.800 [INFO][4294] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.811 [INFO][4294] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.835 [INFO][4294] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.840 [INFO][4294] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.852 [INFO][4294] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.854 [INFO][4294] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.861 [INFO][4294] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008 Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.877 [INFO][4294] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.887 [INFO][4294] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.70/26] block=192.168.121.64/26 
handle="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.887 [INFO][4294] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.70/26] handle="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" host="172-238-171-132" Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.887 [INFO][4294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:19.972999 containerd[1558]: 2026-03-07 01:33:19.887 [INFO][4294] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.70/26] IPv6=[] ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" HandleID="k8s-pod-network.316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.917 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0fc753-6131-4c82-a147-0fb13afc44d9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"coredns-674b8bbfcf-lm86t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35952829b2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.917 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.70/32] ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.917 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35952829b2f ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.931 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.932 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0fc753-6131-4c82-a147-0fb13afc44d9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008", Pod:"coredns-674b8bbfcf-lm86t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35952829b2f", MAC:"a2:6e:f1:97:f7:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:19.973789 containerd[1558]: 2026-03-07 01:33:19.958 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008" Namespace="kube-system" Pod="coredns-674b8bbfcf-lm86t" WorkloadEndpoint="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:20.013110 containerd[1558]: time="2026-03-07T01:33:20.009835677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:20.013110 containerd[1558]: time="2026-03-07T01:33:20.009895686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:20.013110 containerd[1558]: time="2026-03-07T01:33:20.010483416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.013110 containerd[1558]: time="2026-03-07T01:33:20.010585456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.065372 systemd-networkd[1233]: cali441f30456c6: Link UP Mar 7 01:33:20.073469 systemd-networkd[1233]: cali441f30456c6: Gained carrier Mar 7 01:33:20.148063 containerd[1558]: time="2026-03-07T01:33:20.143887680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:20.148063 containerd[1558]: time="2026-03-07T01:33:20.144038640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:20.148063 containerd[1558]: time="2026-03-07T01:33:20.144059350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.148063 containerd[1558]: time="2026-03-07T01:33:20.145432600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.201 [ERROR][4198] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.325 [INFO][4198] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0 calico-apiserver-5966b74f89- calico-system b74ff1db-535d-4184-94d2-59a40d15c8c9 883 0 2026-03-07 01:33:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5966b74f89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-171-132 calico-apiserver-5966b74f89-mgnnz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali441f30456c6 [] [] }} ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 
01:33:19.334 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.754 [INFO][4279] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" HandleID="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.773 [INFO][4279] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" HandleID="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041a260), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"calico-apiserver-5966b74f89-mgnnz", "timestamp":"2026-03-07 01:33:19.754446289 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c4420)} Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.774 [INFO][4279] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.890 [INFO][4279] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.891 [INFO][4279] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.901 [INFO][4279] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.915 [INFO][4279] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.927 [INFO][4279] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.932 [INFO][4279] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.944 [INFO][4279] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.944 [INFO][4279] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.962 [INFO][4279] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434 Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.976 [INFO][4279] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.998 [INFO][4279] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.71/26] block=192.168.121.64/26 
handle="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.999 [INFO][4279] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.71/26] handle="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" host="172-238-171-132" Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:19.999 [INFO][4279] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:20.170932 containerd[1558]: 2026-03-07 01:33:20.001 [INFO][4279] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.71/26] IPv6=[] ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" HandleID="k8s-pod-network.051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.033 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"b74ff1db-535d-4184-94d2-59a40d15c8c9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"calico-apiserver-5966b74f89-mgnnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali441f30456c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.033 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.71/32] ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.033 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali441f30456c6 ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.079 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.088 [INFO][4198] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"b74ff1db-535d-4184-94d2-59a40d15c8c9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434", Pod:"calico-apiserver-5966b74f89-mgnnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali441f30456c6", MAC:"be:71:6a:dc:03:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:20.171570 containerd[1558]: 2026-03-07 01:33:20.142 [INFO][4198] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434" Namespace="calico-system" Pod="calico-apiserver-5966b74f89-mgnnz" WorkloadEndpoint="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:20.187873 containerd[1558]: time="2026-03-07T01:33:20.187517408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666c98579-qxmzh,Uid:ff997689-ed72-4f2b-ad6a-78b32cbaabf3,Namespace:calico-system,Attempt:1,} returns sandbox id \"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81\"" Mar 7 01:33:20.211058 containerd[1558]: time="2026-03-07T01:33:20.211002662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-dxh9d,Uid:82ecbaaa-38d2-47ca-8766-21b7ed9556a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2\"" Mar 7 01:33:20.235931 containerd[1558]: time="2026-03-07T01:33:20.235884584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vdbpr,Uid:ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343\"" Mar 7 01:33:20.237787 kubelet[2709]: E0307 01:33:20.237233 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:20.245340 containerd[1558]: time="2026-03-07T01:33:20.245304082Z" level=info msg="CreateContainer within sandbox \"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:33:20.262164 containerd[1558]: time="2026-03-07T01:33:20.261539988Z" level=info msg="CreateContainer within sandbox \"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d046563fa2c5f482dff94df6bfca7d7fd1b892a9dbd2cc601fb7b730682b09c\"" Mar 7 01:33:20.267928 containerd[1558]: time="2026-03-07T01:33:20.267124756Z" level=info msg="StartContainer for \"9d046563fa2c5f482dff94df6bfca7d7fd1b892a9dbd2cc601fb7b730682b09c\"" Mar 7 01:33:20.281735 containerd[1558]: time="2026-03-07T01:33:20.281300722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:20.281735 containerd[1558]: time="2026-03-07T01:33:20.281353782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:20.281735 containerd[1558]: time="2026-03-07T01:33:20.281367642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.281735 containerd[1558]: time="2026-03-07T01:33:20.281461032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.307246 containerd[1558]: time="2026-03-07T01:33:20.307212435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-6sjb8,Uid:a05ad463-2b61-48be-ab34-432b9b18b36f,Namespace:calico-system,Attempt:1,} returns sandbox id \"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84\"" Mar 7 01:33:20.334931 containerd[1558]: time="2026-03-07T01:33:20.334811497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lm86t,Uid:fe0fc753-6131-4c82-a147-0fb13afc44d9,Namespace:kube-system,Attempt:1,} returns sandbox id \"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008\"" Mar 7 01:33:20.339491 kubelet[2709]: E0307 01:33:20.338243 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:20.346852 containerd[1558]: time="2026-03-07T01:33:20.346801834Z" level=info msg="CreateContainer within sandbox \"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:33:20.371948 systemd-networkd[1233]: calie344121ca92: Link UP Mar 7 01:33:20.374421 systemd-networkd[1233]: calie344121ca92: Gained carrier Mar 7 01:33:20.387737 containerd[1558]: time="2026-03-07T01:33:20.387537003Z" level=info msg="CreateContainer within sandbox \"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20e29a9a30bcc3938c5cdb916f45a62d474a28e262a3dd6a5da21e68faea713c\"" Mar 7 01:33:20.399315 containerd[1558]: time="2026-03-07T01:33:20.399276880Z" level=info msg="StartContainer for \"20e29a9a30bcc3938c5cdb916f45a62d474a28e262a3dd6a5da21e68faea713c\"" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:19.809 [ERROR][4330] cni-plugin/utils.go 116: File 
does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:19.827 [INFO][4330] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0 whisker-744b5f585- calico-system d59dd54b-2194-49c2-a181-648b8af867fb 905 0 2026-03-07 01:33:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:744b5f585 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-171-132 whisker-744b5f585-4xgb5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie344121ca92 [] [] }} ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:19.827 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.285 [INFO][4433] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" HandleID="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Workload="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.297 [INFO][4433] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" 
HandleID="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Workload="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035d7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-132", "pod":"whisker-744b5f585-4xgb5", "timestamp":"2026-03-07 01:33:20.285589881 +0000 UTC"}, Hostname:"172-238-171-132", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000292dc0)} Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.297 [INFO][4433] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.297 [INFO][4433] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.297 [INFO][4433] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-132' Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.302 [INFO][4433] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.308 [INFO][4433] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.317 [INFO][4433] ipam/ipam.go 526: Trying affinity for 192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.319 [INFO][4433] ipam/ipam.go 160: Attempting to load block cidr=192.168.121.64/26 host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.323 [INFO][4433] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 
host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.323 [INFO][4433] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.326 [INFO][4433] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24 Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.334 [INFO][4433] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.346 [INFO][4433] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.121.72/26] block=192.168.121.64/26 handle="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.346 [INFO][4433] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.121.72/26] handle="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" host="172-238-171-132" Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.347 [INFO][4433] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:33:20.422777 containerd[1558]: 2026-03-07 01:33:20.347 [INFO][4433] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.121.72/26] IPv6=[] ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" HandleID="k8s-pod-network.05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Workload="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.365 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0", GenerateName:"whisker-744b5f585-", Namespace:"calico-system", SelfLink:"", UID:"d59dd54b-2194-49c2-a181-648b8af867fb", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"744b5f585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"", Pod:"whisker-744b5f585-4xgb5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calie344121ca92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.365 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.72/32] ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.365 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie344121ca92 ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.376 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.377 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0", GenerateName:"whisker-744b5f585-", Namespace:"calico-system", SelfLink:"", UID:"d59dd54b-2194-49c2-a181-648b8af867fb", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 
1, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"744b5f585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24", Pod:"whisker-744b5f585-4xgb5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie344121ca92", MAC:"c2:ec:a9:b2:d5:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:20.426505 containerd[1558]: 2026-03-07 01:33:20.398 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24" Namespace="calico-system" Pod="whisker-744b5f585-4xgb5" WorkloadEndpoint="172--238--171--132-k8s-whisker--744b5f585--4xgb5-eth0" Mar 7 01:33:20.565786 containerd[1558]: time="2026-03-07T01:33:20.565406624Z" level=info msg="StartContainer for \"9d046563fa2c5f482dff94df6bfca7d7fd1b892a9dbd2cc601fb7b730682b09c\" returns successfully" Mar 7 01:33:20.571099 systemd-networkd[1233]: calic1c2353176f: Gained IPv6LL Mar 7 01:33:20.614190 containerd[1558]: time="2026-03-07T01:33:20.613531391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5966b74f89-mgnnz,Uid:b74ff1db-535d-4184-94d2-59a40d15c8c9,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434\"" Mar 7 01:33:20.618670 containerd[1558]: time="2026-03-07T01:33:20.618369049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:20.624165 containerd[1558]: time="2026-03-07T01:33:20.621387009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:20.624165 containerd[1558]: time="2026-03-07T01:33:20.621407929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.624165 containerd[1558]: time="2026-03-07T01:33:20.621515459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:20.711889 containerd[1558]: time="2026-03-07T01:33:20.710051184Z" level=info msg="StartContainer for \"20e29a9a30bcc3938c5cdb916f45a62d474a28e262a3dd6a5da21e68faea713c\" returns successfully" Mar 7 01:33:20.817392 kubelet[2709]: I0307 01:33:20.817002 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca" path="/var/lib/kubelet/pods/3faa65e7-49f7-49b8-9f60-f29ba1f1f4ca/volumes" Mar 7 01:33:20.876261 containerd[1558]: time="2026-03-07T01:33:20.875061479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:20.878095 containerd[1558]: time="2026-03-07T01:33:20.878057248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-744b5f585-4xgb5,Uid:d59dd54b-2194-49c2-a181-648b8af867fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24\"" Mar 7 01:33:20.878225 containerd[1558]: time="2026-03-07T01:33:20.878192238Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:33:20.878524 containerd[1558]: time="2026-03-07T01:33:20.878472418Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:20.882998 containerd[1558]: time="2026-03-07T01:33:20.882967156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:20.884680 containerd[1558]: time="2026-03-07T01:33:20.884650756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.3043312s" Mar 7 01:33:20.884734 containerd[1558]: time="2026-03-07T01:33:20.884679136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:33:20.887089 containerd[1558]: time="2026-03-07T01:33:20.887047355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:33:20.892941 containerd[1558]: time="2026-03-07T01:33:20.892885354Z" level=info msg="CreateContainer within sandbox \"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:33:20.925959 containerd[1558]: time="2026-03-07T01:33:20.925902355Z" level=info msg="CreateContainer within sandbox \"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"e3f6b98f59b6169e80a235d2b3bed4fc0aa411930eae55e745a03fa04a241b6e\"" Mar 7 01:33:20.926977 containerd[1558]: time="2026-03-07T01:33:20.926942374Z" level=info msg="StartContainer for \"e3f6b98f59b6169e80a235d2b3bed4fc0aa411930eae55e745a03fa04a241b6e\"" Mar 7 01:33:21.025270 containerd[1558]: time="2026-03-07T01:33:21.025224988Z" level=info msg="StartContainer for \"e3f6b98f59b6169e80a235d2b3bed4fc0aa411930eae55e745a03fa04a241b6e\" returns successfully" Mar 7 01:33:21.080811 kubelet[2709]: E0307 01:33:21.080120 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:21.097120 kubelet[2709]: E0307 01:33:21.097058 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:21.105586 kubelet[2709]: I0307 01:33:21.105107 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vdbpr" podStartSLOduration=25.105088340000002 podStartE2EDuration="25.10508834s" podCreationTimestamp="2026-03-07 01:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:33:21.103415741 +0000 UTC m=+30.400728744" watchObservedRunningTime="2026-03-07 01:33:21.10508834 +0000 UTC m=+30.402401343" Mar 7 01:33:21.139016 kubelet[2709]: I0307 01:33:21.134482 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lm86t" podStartSLOduration=25.134467674 podStartE2EDuration="25.134467674s" podCreationTimestamp="2026-03-07 01:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:33:21.130500415 +0000 UTC 
m=+30.427813428" watchObservedRunningTime="2026-03-07 01:33:21.134467674 +0000 UTC m=+30.431780667" Mar 7 01:33:21.151985 systemd-networkd[1233]: cali4b02ce7dbae: Gained IPv6LL Mar 7 01:33:21.154029 systemd-networkd[1233]: cali35952829b2f: Gained IPv6LL Mar 7 01:33:21.275445 systemd-networkd[1233]: cali0eda48199d3: Gained IPv6LL Mar 7 01:33:21.279283 systemd-networkd[1233]: cali441f30456c6: Gained IPv6LL Mar 7 01:33:21.340288 systemd-networkd[1233]: calic1411b8d4c3: Gained IPv6LL Mar 7 01:33:21.853952 systemd-networkd[1233]: calie344121ca92: Gained IPv6LL Mar 7 01:33:22.156130 kubelet[2709]: E0307 01:33:22.156010 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:22.157335 kubelet[2709]: E0307 01:33:22.157284 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:22.328199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290953757.mount: Deactivated successfully. 
Mar 7 01:33:22.772722 containerd[1558]: time="2026-03-07T01:33:22.772413688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:22.773867 containerd[1558]: time="2026-03-07T01:33:22.773388918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:33:22.774550 containerd[1558]: time="2026-03-07T01:33:22.774522467Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:22.777486 containerd[1558]: time="2026-03-07T01:33:22.777438797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:22.779310 containerd[1558]: time="2026-03-07T01:33:22.778744458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.891586763s" Mar 7 01:33:22.779310 containerd[1558]: time="2026-03-07T01:33:22.778988757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:33:22.781946 containerd[1558]: time="2026-03-07T01:33:22.781926896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:33:22.786973 containerd[1558]: time="2026-03-07T01:33:22.785917276Z" level=info msg="CreateContainer within sandbox 
\"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:33:22.795968 containerd[1558]: time="2026-03-07T01:33:22.794818005Z" level=info msg="CreateContainer within sandbox \"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"46af84f7c6d074051a120c7d7fd2c6d1641c6bbc7b3c2600427d4c1d5d991f07\"" Mar 7 01:33:22.796537 containerd[1558]: time="2026-03-07T01:33:22.796486303Z" level=info msg="StartContainer for \"46af84f7c6d074051a120c7d7fd2c6d1641c6bbc7b3c2600427d4c1d5d991f07\"" Mar 7 01:33:22.841519 systemd[1]: run-containerd-runc-k8s.io-46af84f7c6d074051a120c7d7fd2c6d1641c6bbc7b3c2600427d4c1d5d991f07-runc.krcpJX.mount: Deactivated successfully. Mar 7 01:33:22.906398 containerd[1558]: time="2026-03-07T01:33:22.906269044Z" level=info msg="StartContainer for \"46af84f7c6d074051a120c7d7fd2c6d1641c6bbc7b3c2600427d4c1d5d991f07\" returns successfully" Mar 7 01:33:23.160016 kubelet[2709]: E0307 01:33:23.159941 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:23.161441 kubelet[2709]: E0307 01:33:23.161408 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:23.171616 kubelet[2709]: I0307 01:33:23.170967 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-dxh9d" podStartSLOduration=14.611299876 podStartE2EDuration="17.170952914s" podCreationTimestamp="2026-03-07 01:33:06 +0000 UTC" firstStartedPulling="2026-03-07 01:33:20.222142408 +0000 UTC m=+29.519455401" lastFinishedPulling="2026-03-07 01:33:22.781795436 +0000 UTC m=+32.079108439" 
observedRunningTime="2026-03-07 01:33:23.169851964 +0000 UTC m=+32.467164957" watchObservedRunningTime="2026-03-07 01:33:23.170952914 +0000 UTC m=+32.468265907" Mar 7 01:33:24.163332 kubelet[2709]: I0307 01:33:24.163282 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:24.699313 containerd[1558]: time="2026-03-07T01:33:24.699259519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:24.700262 containerd[1558]: time="2026-03-07T01:33:24.700220470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:33:24.700777 containerd[1558]: time="2026-03-07T01:33:24.700737829Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:24.703929 containerd[1558]: time="2026-03-07T01:33:24.703602059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:24.705476 containerd[1558]: time="2026-03-07T01:33:24.704419069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.922398713s" Mar 7 01:33:24.705476 containerd[1558]: time="2026-03-07T01:33:24.704453979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:33:24.706063 containerd[1558]: time="2026-03-07T01:33:24.706000309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:33:24.727348 containerd[1558]: time="2026-03-07T01:33:24.727222247Z" level=info msg="CreateContainer within sandbox \"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:33:24.743170 containerd[1558]: time="2026-03-07T01:33:24.743127225Z" level=info msg="CreateContainer within sandbox \"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666\"" Mar 7 01:33:24.744740 containerd[1558]: time="2026-03-07T01:33:24.744570015Z" level=info msg="StartContainer for \"739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666\"" Mar 7 01:33:24.749029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445382176.mount: Deactivated successfully. 
Mar 7 01:33:24.871477 containerd[1558]: time="2026-03-07T01:33:24.871390972Z" level=info msg="StartContainer for \"739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666\" returns successfully" Mar 7 01:33:25.210555 kubelet[2709]: I0307 01:33:25.210503 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-666c98579-qxmzh" podStartSLOduration=15.727841557 podStartE2EDuration="20.210484207s" podCreationTimestamp="2026-03-07 01:33:05 +0000 UTC" firstStartedPulling="2026-03-07 01:33:20.223208048 +0000 UTC m=+29.520521051" lastFinishedPulling="2026-03-07 01:33:24.705850708 +0000 UTC m=+34.003163701" observedRunningTime="2026-03-07 01:33:25.206634958 +0000 UTC m=+34.503947981" watchObservedRunningTime="2026-03-07 01:33:25.210484207 +0000 UTC m=+34.507797220" Mar 7 01:33:26.190951 kubelet[2709]: I0307 01:33:26.190868 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:26.611988 containerd[1558]: time="2026-03-07T01:33:26.611248103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:26.613175 containerd[1558]: time="2026-03-07T01:33:26.613098332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:33:26.615460 containerd[1558]: time="2026-03-07T01:33:26.614073782Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:26.619928 containerd[1558]: time="2026-03-07T01:33:26.619245902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:26.621602 containerd[1558]: 
time="2026-03-07T01:33:26.621306602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.915271643s" Mar 7 01:33:26.621602 containerd[1558]: time="2026-03-07T01:33:26.621438972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:33:26.625099 containerd[1558]: time="2026-03-07T01:33:26.625066942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:33:26.636111 containerd[1558]: time="2026-03-07T01:33:26.636063792Z" level=info msg="CreateContainer within sandbox \"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:33:26.656183 containerd[1558]: time="2026-03-07T01:33:26.655755451Z" level=info msg="CreateContainer within sandbox \"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"765871e055c79923aafcea959ab391666405371c6da4e6d12e9472fc3f04c34c\"" Mar 7 01:33:26.656969 containerd[1558]: time="2026-03-07T01:33:26.656787271Z" level=info msg="StartContainer for \"765871e055c79923aafcea959ab391666405371c6da4e6d12e9472fc3f04c34c\"" Mar 7 01:33:26.778511 containerd[1558]: time="2026-03-07T01:33:26.778450278Z" level=info msg="StartContainer for \"765871e055c79923aafcea959ab391666405371c6da4e6d12e9472fc3f04c34c\" returns successfully" Mar 7 01:33:26.827270 containerd[1558]: time="2026-03-07T01:33:26.827228826Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:26.830466 containerd[1558]: time="2026-03-07T01:33:26.830416826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:33:26.844423 containerd[1558]: time="2026-03-07T01:33:26.844367886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 219.240114ms" Mar 7 01:33:26.844423 containerd[1558]: time="2026-03-07T01:33:26.844401386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:33:26.847369 containerd[1558]: time="2026-03-07T01:33:26.847210386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:33:26.853886 containerd[1558]: time="2026-03-07T01:33:26.852149026Z" level=info msg="CreateContainer within sandbox \"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:33:26.881762 containerd[1558]: time="2026-03-07T01:33:26.881529736Z" level=info msg="CreateContainer within sandbox \"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"168d380fb8add8ad7de2c443acc26bfba1e002fd68b3c283897a37f860be7739\"" Mar 7 01:33:26.882216 containerd[1558]: time="2026-03-07T01:33:26.882186565Z" level=info msg="StartContainer for \"168d380fb8add8ad7de2c443acc26bfba1e002fd68b3c283897a37f860be7739\"" Mar 7 01:33:27.011786 containerd[1558]: time="2026-03-07T01:33:27.011237612Z" level=info msg="StartContainer 
for \"168d380fb8add8ad7de2c443acc26bfba1e002fd68b3c283897a37f860be7739\" returns successfully" Mar 7 01:33:27.243891 kubelet[2709]: I0307 01:33:27.241663 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5966b74f89-6sjb8" podStartSLOduration=14.929886124 podStartE2EDuration="21.241647493s" podCreationTimestamp="2026-03-07 01:33:06 +0000 UTC" firstStartedPulling="2026-03-07 01:33:20.312296953 +0000 UTC m=+29.609609956" lastFinishedPulling="2026-03-07 01:33:26.624058332 +0000 UTC m=+35.921371325" observedRunningTime="2026-03-07 01:33:27.241165113 +0000 UTC m=+36.538478116" watchObservedRunningTime="2026-03-07 01:33:27.241647493 +0000 UTC m=+36.538960486" Mar 7 01:33:27.243891 kubelet[2709]: I0307 01:33:27.241993 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5966b74f89-mgnnz" podStartSLOduration=15.019452975 podStartE2EDuration="21.241987074s" podCreationTimestamp="2026-03-07 01:33:06 +0000 UTC" firstStartedPulling="2026-03-07 01:33:20.622993798 +0000 UTC m=+29.920306791" lastFinishedPulling="2026-03-07 01:33:26.845527897 +0000 UTC m=+36.142840890" observedRunningTime="2026-03-07 01:33:27.220305134 +0000 UTC m=+36.517618167" watchObservedRunningTime="2026-03-07 01:33:27.241987074 +0000 UTC m=+36.539300067" Mar 7 01:33:27.739945 containerd[1558]: time="2026-03-07T01:33:27.737389497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:27.741358 containerd[1558]: time="2026-03-07T01:33:27.741327167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:33:27.742531 containerd[1558]: time="2026-03-07T01:33:27.742508667Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 7 01:33:27.746395 containerd[1558]: time="2026-03-07T01:33:27.746269037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:27.748184 containerd[1558]: time="2026-03-07T01:33:27.748123147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 900.883951ms" Mar 7 01:33:27.748462 containerd[1558]: time="2026-03-07T01:33:27.748357497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:33:27.752198 kubelet[2709]: I0307 01:33:27.750504 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:27.752198 kubelet[2709]: E0307 01:33:27.751081 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:27.753933 containerd[1558]: time="2026-03-07T01:33:27.753496717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:33:27.761202 containerd[1558]: time="2026-03-07T01:33:27.761178997Z" level=info msg="CreateContainer within sandbox \"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:33:27.778164 containerd[1558]: time="2026-03-07T01:33:27.778130748Z" level=info msg="CreateContainer within sandbox \"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24\" 
for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5b445dc35524820087a3d59c62b9852786d3983b3f05cac00ba3582f2922e6b8\"" Mar 7 01:33:27.786577 containerd[1558]: time="2026-03-07T01:33:27.782227438Z" level=info msg="StartContainer for \"5b445dc35524820087a3d59c62b9852786d3983b3f05cac00ba3582f2922e6b8\"" Mar 7 01:33:27.792556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803758313.mount: Deactivated successfully. Mar 7 01:33:27.981887 containerd[1558]: time="2026-03-07T01:33:27.981848538Z" level=info msg="StartContainer for \"5b445dc35524820087a3d59c62b9852786d3983b3f05cac00ba3582f2922e6b8\" returns successfully" Mar 7 01:33:28.216036 kubelet[2709]: I0307 01:33:28.216011 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:28.217784 kubelet[2709]: E0307 01:33:28.216544 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:33:28.218034 kubelet[2709]: I0307 01:33:28.218020 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:28.235935 kernel: calico-node[5158]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:33:29.012265 containerd[1558]: time="2026-03-07T01:33:29.011543447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:29.013465 containerd[1558]: time="2026-03-07T01:33:29.013432877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:33:29.014404 containerd[1558]: time="2026-03-07T01:33:29.014384787Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 
01:33:29.017079 containerd[1558]: time="2026-03-07T01:33:29.017046078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:29.018752 containerd[1558]: time="2026-03-07T01:33:29.018654817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.26423373s" Mar 7 01:33:29.018752 containerd[1558]: time="2026-03-07T01:33:29.018683367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:33:29.021020 containerd[1558]: time="2026-03-07T01:33:29.020696037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:33:29.024074 containerd[1558]: time="2026-03-07T01:33:29.024032838Z" level=info msg="CreateContainer within sandbox \"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:33:29.048439 containerd[1558]: time="2026-03-07T01:33:29.048406559Z" level=info msg="CreateContainer within sandbox \"244aaeab79c71ef4435506ebe6341b83d48074480f1e17d54655dee8cb974449\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"328dcb1ffd9231009ea8470669990edf014a51a7e7706e73c90799eec24926f6\"" Mar 7 01:33:29.049922 containerd[1558]: time="2026-03-07T01:33:29.049205440Z" level=info msg="StartContainer for 
\"328dcb1ffd9231009ea8470669990edf014a51a7e7706e73c90799eec24926f6\"" Mar 7 01:33:29.050824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988990003.mount: Deactivated successfully. Mar 7 01:33:29.148041 containerd[1558]: time="2026-03-07T01:33:29.147993196Z" level=info msg="StartContainer for \"328dcb1ffd9231009ea8470669990edf014a51a7e7706e73c90799eec24926f6\" returns successfully" Mar 7 01:33:29.406007 systemd-networkd[1233]: vxlan.calico: Link UP Mar 7 01:33:29.406016 systemd-networkd[1233]: vxlan.calico: Gained carrier Mar 7 01:33:29.928216 kubelet[2709]: I0307 01:33:29.928076 2709 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:33:29.931455 kubelet[2709]: I0307 01:33:29.930839 2709 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:33:30.194097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141462644.mount: Deactivated successfully. 
Mar 7 01:33:30.208470 containerd[1558]: time="2026-03-07T01:33:30.207703461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:30.208470 containerd[1558]: time="2026-03-07T01:33:30.208437522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:33:30.210149 containerd[1558]: time="2026-03-07T01:33:30.210106382Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:30.213399 containerd[1558]: time="2026-03-07T01:33:30.212718002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:33:30.213399 containerd[1558]: time="2026-03-07T01:33:30.213298002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.192416955s" Mar 7 01:33:30.213399 containerd[1558]: time="2026-03-07T01:33:30.213324752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:33:30.217957 containerd[1558]: time="2026-03-07T01:33:30.217868723Z" level=info msg="CreateContainer within sandbox \"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:33:30.241322 
containerd[1558]: time="2026-03-07T01:33:30.241286995Z" level=info msg="CreateContainer within sandbox \"05e7a72be37520fe5185c2d96e751333726657d55e1bb064b1299c7d4c6c4c24\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e055a8c0376c36659aeed31e82ae30b755217091379063dc9c055ce6dfe81cac\"" Mar 7 01:33:30.241930 containerd[1558]: time="2026-03-07T01:33:30.241826525Z" level=info msg="StartContainer for \"e055a8c0376c36659aeed31e82ae30b755217091379063dc9c055ce6dfe81cac\"" Mar 7 01:33:30.336192 containerd[1558]: time="2026-03-07T01:33:30.336157344Z" level=info msg="StartContainer for \"e055a8c0376c36659aeed31e82ae30b755217091379063dc9c055ce6dfe81cac\" returns successfully" Mar 7 01:33:30.939584 systemd-networkd[1233]: vxlan.calico: Gained IPv6LL Mar 7 01:33:31.258421 kubelet[2709]: I0307 01:33:31.255437 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tt5kk" podStartSLOduration=15.815150426 podStartE2EDuration="26.255420267s" podCreationTimestamp="2026-03-07 01:33:05 +0000 UTC" firstStartedPulling="2026-03-07 01:33:18.579596706 +0000 UTC m=+27.876909699" lastFinishedPulling="2026-03-07 01:33:29.019866537 +0000 UTC m=+38.317179540" observedRunningTime="2026-03-07 01:33:29.271897904 +0000 UTC m=+38.569210917" watchObservedRunningTime="2026-03-07 01:33:31.255420267 +0000 UTC m=+40.552733270" Mar 7 01:33:35.370536 kubelet[2709]: I0307 01:33:35.369541 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:35.399655 systemd[1]: run-containerd-runc-k8s.io-46af84f7c6d074051a120c7d7fd2c6d1641c6bbc7b3c2600427d4c1d5d991f07-runc.USaRwg.mount: Deactivated successfully. 
Mar 7 01:33:35.469868 kubelet[2709]: I0307 01:33:35.469785 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-744b5f585-4xgb5" podStartSLOduration=7.134447732 podStartE2EDuration="16.469740386s" podCreationTimestamp="2026-03-07 01:33:19 +0000 UTC" firstStartedPulling="2026-03-07 01:33:20.879690628 +0000 UTC m=+30.177003621" lastFinishedPulling="2026-03-07 01:33:30.214983282 +0000 UTC m=+39.512296275" observedRunningTime="2026-03-07 01:33:31.259452928 +0000 UTC m=+40.556765951" watchObservedRunningTime="2026-03-07 01:33:35.469740386 +0000 UTC m=+44.767053409" Mar 7 01:33:35.495044 kubelet[2709]: I0307 01:33:35.494571 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:36.210320 systemd[1]: run-containerd-runc-k8s.io-739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666-runc.n7ARrz.mount: Deactivated successfully. Mar 7 01:33:50.791310 containerd[1558]: time="2026-03-07T01:33:50.791261583Z" level=info msg="StopPodSandbox for \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\"" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.837 [WARNING][5572] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"b74ff1db-535d-4184-94d2-59a40d15c8c9", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434", Pod:"calico-apiserver-5966b74f89-mgnnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali441f30456c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.837 [INFO][5572] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.838 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" iface="eth0" netns="" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.838 [INFO][5572] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.838 [INFO][5572] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.861 [INFO][5582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.861 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.861 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.870 [WARNING][5582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.870 [INFO][5582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.872 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:50.878388 containerd[1558]: 2026-03-07 01:33:50.874 [INFO][5572] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.878388 containerd[1558]: time="2026-03-07T01:33:50.878270017Z" level=info msg="TearDown network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" successfully" Mar 7 01:33:50.878388 containerd[1558]: time="2026-03-07T01:33:50.878293377Z" level=info msg="StopPodSandbox for \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" returns successfully" Mar 7 01:33:50.880899 containerd[1558]: time="2026-03-07T01:33:50.878961557Z" level=info msg="RemovePodSandbox for \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\"" Mar 7 01:33:50.880899 containerd[1558]: time="2026-03-07T01:33:50.878999577Z" level=info msg="Forcibly stopping sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\"" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.938 [WARNING][5597] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"b74ff1db-535d-4184-94d2-59a40d15c8c9", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"051be7bd981458008dde1252afe6863e922085a18febb03d7fd7555d84593434", Pod:"calico-apiserver-5966b74f89-mgnnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali441f30456c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.938 [INFO][5597] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.938 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" iface="eth0" netns="" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.938 [INFO][5597] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.938 [INFO][5597] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.978 [INFO][5604] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.978 [INFO][5604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.978 [INFO][5604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.985 [WARNING][5604] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.985 [INFO][5604] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" HandleID="k8s-pod-network.a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--mgnnz-eth0" Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.986 [INFO][5604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:50.993104 containerd[1558]: 2026-03-07 01:33:50.989 [INFO][5597] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a" Mar 7 01:33:50.993496 containerd[1558]: time="2026-03-07T01:33:50.993169761Z" level=info msg="TearDown network for sandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" successfully" Mar 7 01:33:51.001713 containerd[1558]: time="2026-03-07T01:33:51.001671804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:51.001773 containerd[1558]: time="2026-03-07T01:33:51.001743244Z" level=info msg="RemovePodSandbox \"a00af7e0bb57a02764a2c89c021e1ddddea1896103651b1b8f0fb0115409ed9a\" returns successfully" Mar 7 01:33:51.002479 containerd[1558]: time="2026-03-07T01:33:51.002457255Z" level=info msg="StopPodSandbox for \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\"" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.050 [WARNING][5619] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0fc753-6131-4c82-a147-0fb13afc44d9", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008", Pod:"coredns-674b8bbfcf-lm86t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35952829b2f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.050 [INFO][5619] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.051 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" iface="eth0" netns="" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.051 [INFO][5619] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.051 [INFO][5619] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.083 [INFO][5626] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.084 [INFO][5626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.084 [INFO][5626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.092 [WARNING][5626] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.092 [INFO][5626] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.093 [INFO][5626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.104628 containerd[1558]: 2026-03-07 01:33:51.099 [INFO][5619] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.106029 containerd[1558]: time="2026-03-07T01:33:51.104580426Z" level=info msg="TearDown network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" successfully" Mar 7 01:33:51.106029 containerd[1558]: time="2026-03-07T01:33:51.105391116Z" level=info msg="StopPodSandbox for \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" returns successfully" Mar 7 01:33:51.106307 containerd[1558]: time="2026-03-07T01:33:51.106282246Z" level=info msg="RemovePodSandbox for \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\"" Mar 7 01:33:51.106343 containerd[1558]: time="2026-03-07T01:33:51.106311016Z" level=info msg="Forcibly stopping sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\"" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.204 [WARNING][5641] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0fc753-6131-4c82-a147-0fb13afc44d9", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"316e8d4f380734805dcfe4a261c5817138cb9f4a2abf99bbebe921f1ec54f008", Pod:"coredns-674b8bbfcf-lm86t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35952829b2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.204 
[INFO][5641] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.204 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" iface="eth0" netns="" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.204 [INFO][5641] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.204 [INFO][5641] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.281 [INFO][5669] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.281 [INFO][5669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.281 [INFO][5669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.293 [WARNING][5669] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.293 [INFO][5669] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" HandleID="k8s-pod-network.b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--lm86t-eth0" Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.305 [INFO][5669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.324110 containerd[1558]: 2026-03-07 01:33:51.314 [INFO][5641] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932" Mar 7 01:33:51.324110 containerd[1558]: time="2026-03-07T01:33:51.319310860Z" level=info msg="TearDown network for sandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" successfully" Mar 7 01:33:51.327049 containerd[1558]: time="2026-03-07T01:33:51.327020903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:51.327352 containerd[1558]: time="2026-03-07T01:33:51.327222393Z" level=info msg="RemovePodSandbox \"b1fe2ff35ee2c358799a2538e72511797b0865768f9141aece882b642b879932\" returns successfully" Mar 7 01:33:51.328274 containerd[1558]: time="2026-03-07T01:33:51.328251753Z" level=info msg="StopPodSandbox for \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\"" Mar 7 01:33:51.352149 containerd[1558]: time="2026-03-07T01:33:51.352121043Z" level=info msg="StopContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" with timeout 5 (s)" Mar 7 01:33:51.352681 containerd[1558]: time="2026-03-07T01:33:51.352664404Z" level=info msg="Stop container \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" with signal terminated" Mar 7 01:33:51.469580 containerd[1558]: time="2026-03-07T01:33:51.469462679Z" level=info msg="shim disconnected" id=6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9 namespace=k8s.io Mar 7 01:33:51.471138 containerd[1558]: time="2026-03-07T01:33:51.469715729Z" level=warning msg="cleaning up after shim disconnected" id=6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9 namespace=k8s.io Mar 7 01:33:51.471138 containerd[1558]: time="2026-03-07T01:33:51.469730579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:51.474604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9-rootfs.mount: Deactivated successfully. Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.386 [WARNING][5685] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0", GenerateName:"calico-kube-controllers-666c98579-", Namespace:"calico-system", SelfLink:"", UID:"ff997689-ed72-4f2b-ad6a-78b32cbaabf3", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666c98579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81", Pod:"calico-kube-controllers-666c98579-qxmzh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1c2353176f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.387 [INFO][5685] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.387 [INFO][5685] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" iface="eth0" netns="" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.387 [INFO][5685] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.387 [INFO][5685] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.450 [INFO][5698] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.450 [INFO][5698] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.450 [INFO][5698] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.457 [WARNING][5698] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.457 [INFO][5698] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.460 [INFO][5698] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.481497 containerd[1558]: 2026-03-07 01:33:51.475 [INFO][5685] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.481497 containerd[1558]: time="2026-03-07T01:33:51.481350384Z" level=info msg="TearDown network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" successfully" Mar 7 01:33:51.481497 containerd[1558]: time="2026-03-07T01:33:51.481390264Z" level=info msg="StopPodSandbox for \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" returns successfully" Mar 7 01:33:51.483202 containerd[1558]: time="2026-03-07T01:33:51.483185835Z" level=info msg="RemovePodSandbox for \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\"" Mar 7 01:33:51.483615 containerd[1558]: time="2026-03-07T01:33:51.483366445Z" level=info msg="Forcibly stopping sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\"" Mar 7 01:33:51.587926 containerd[1558]: time="2026-03-07T01:33:51.587869016Z" level=info msg="StopContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" returns successfully" Mar 7 
01:33:51.593481 containerd[1558]: time="2026-03-07T01:33:51.593451378Z" level=info msg="StopPodSandbox for \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\"" Mar 7 01:33:51.593554 containerd[1558]: time="2026-03-07T01:33:51.593493648Z" level=info msg="Container to stop \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:33:51.593554 containerd[1558]: time="2026-03-07T01:33:51.593506158Z" level=info msg="Container to stop \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:33:51.593554 containerd[1558]: time="2026-03-07T01:33:51.593515778Z" level=info msg="Container to stop \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:33:51.593554 containerd[1558]: time="2026-03-07T01:33:51.593524238Z" level=info msg="Container to stop \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 01:33:51.601827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd-shm.mount: Deactivated successfully. Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.557 [WARNING][5740] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0", GenerateName:"calico-kube-controllers-666c98579-", Namespace:"calico-system", SelfLink:"", UID:"ff997689-ed72-4f2b-ad6a-78b32cbaabf3", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666c98579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"6521910ca081baad877f6607a25f9a071910b53f3f60d6f78a8a05eb830e3a81", Pod:"calico-kube-controllers-666c98579-qxmzh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1c2353176f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.559 [INFO][5740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.559 [INFO][5740] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" iface="eth0" netns="" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.559 [INFO][5740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.559 [INFO][5740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.588 [INFO][5747] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.588 [INFO][5747] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.588 [INFO][5747] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.597 [WARNING][5747] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.597 [INFO][5747] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" HandleID="k8s-pod-network.80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Workload="172--238--171--132-k8s-calico--kube--controllers--666c98579--qxmzh-eth0" Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.599 [INFO][5747] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.611292 containerd[1558]: 2026-03-07 01:33:51.604 [INFO][5740] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80" Mar 7 01:33:51.611636 containerd[1558]: time="2026-03-07T01:33:51.611316275Z" level=info msg="TearDown network for sandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" successfully" Mar 7 01:33:51.618396 containerd[1558]: time="2026-03-07T01:33:51.618367908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:51.618462 containerd[1558]: time="2026-03-07T01:33:51.618426608Z" level=info msg="RemovePodSandbox \"80640b76daa89af16648164251d2c74e46d576afebb2494a321a39a3ffd2df80\" returns successfully" Mar 7 01:33:51.618875 containerd[1558]: time="2026-03-07T01:33:51.618853939Z" level=info msg="StopPodSandbox for \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\"" Mar 7 01:33:51.646528 containerd[1558]: time="2026-03-07T01:33:51.646347679Z" level=info msg="shim disconnected" id=121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd namespace=k8s.io Mar 7 01:33:51.646528 containerd[1558]: time="2026-03-07T01:33:51.646406249Z" level=warning msg="cleaning up after shim disconnected" id=121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd namespace=k8s.io Mar 7 01:33:51.646528 containerd[1558]: time="2026-03-07T01:33:51.646415399Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:51.656684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd-rootfs.mount: Deactivated successfully. 
Mar 7 01:33:51.686324 containerd[1558]: time="2026-03-07T01:33:51.686172365Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:33:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:33:51.698934 containerd[1558]: time="2026-03-07T01:33:51.698499249Z" level=info msg="TearDown network for sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" successfully" Mar 7 01:33:51.700597 containerd[1558]: time="2026-03-07T01:33:51.700527780Z" level=info msg="StopPodSandbox for \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" returns successfully" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.690 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"82ecbaaa-38d2-47ca-8766-21b7ed9556a7", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", 
ContainerID:"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2", Pod:"goldmane-5b85766d88-dxh9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1411b8d4c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.690 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.690 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" iface="eth0" netns="" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.690 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.690 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.721 [INFO][5804] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.721 [INFO][5804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.721 [INFO][5804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.729 [WARNING][5804] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.729 [INFO][5804] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.733 [INFO][5804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.746463 containerd[1558]: 2026-03-07 01:33:51.736 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.746463 containerd[1558]: time="2026-03-07T01:33:51.746366548Z" level=info msg="TearDown network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" successfully" Mar 7 01:33:51.746463 containerd[1558]: time="2026-03-07T01:33:51.746388888Z" level=info msg="StopPodSandbox for \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" returns successfully" Mar 7 01:33:51.752028 containerd[1558]: time="2026-03-07T01:33:51.750504750Z" level=info msg="RemovePodSandbox for \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\"" Mar 7 01:33:51.752028 containerd[1558]: time="2026-03-07T01:33:51.750533210Z" level=info msg="Forcibly stopping sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\"" Mar 7 01:33:51.809832 kubelet[2709]: I0307 01:33:51.809801 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-lib-calico\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810319 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-nodeproc\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810338 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-lib-modules\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810258 2709 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810390 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-nodeproc" (OuterVolumeSpecName: "nodeproc") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "nodeproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810421 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjnt2\" (UniqueName: \"kubernetes.io/projected/950033ed-d8e0-41bd-bd2f-73e016c04f0e-kube-api-access-vjnt2\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.810563 kubelet[2709]: I0307 01:33:51.810440 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-net-dir\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.811030 kubelet[2709]: I0307 01:33:51.810731 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811654 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-sys-fs\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811675 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-bpffs\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811695 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/950033ed-d8e0-41bd-bd2f-73e016c04f0e-node-certs\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811708 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-bin-dir\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811720 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-policysync\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812031 kubelet[2709]: I0307 01:33:51.811734 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-flexvol-driver-host\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812191 kubelet[2709]: I0307 01:33:51.811746 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-xtables-lock\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812191 kubelet[2709]: I0307 01:33:51.811762 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/950033ed-d8e0-41bd-bd2f-73e016c04f0e-tigera-ca-bundle\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812191 kubelet[2709]: I0307 01:33:51.811778 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-run-calico\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812191 kubelet[2709]: I0307 01:33:51.811790 2709 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-log-dir\") pod \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\" (UID: \"950033ed-d8e0-41bd-bd2f-73e016c04f0e\") " Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812786 2709 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-lib-calico\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812804 2709 reconciler_common.go:299] "Volume detached for volume \"nodeproc\" (UniqueName: 
\"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-nodeproc\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812813 2709 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-lib-modules\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812839 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812862 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.812930 kubelet[2709]: I0307 01:33:51.812879 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-sys-fs" (OuterVolumeSpecName: "sys-fs") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "sys-fs". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.813265 kubelet[2709]: I0307 01:33:51.812895 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-bpffs" (OuterVolumeSpecName: "bpffs") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). 
InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.815243 kubelet[2709]: I0307 01:33:51.814964 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.818811 kubelet[2709]: I0307 01:33:51.816756 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-policysync" (OuterVolumeSpecName: "policysync") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.818811 kubelet[2709]: I0307 01:33:51.816772 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.818811 kubelet[2709]: I0307 01:33:51.816804 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.818811 kubelet[2709]: I0307 01:33:51.816825 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:33:51.827602 systemd[1]: var-lib-kubelet-pods-950033ed\x2dd8e0\x2d41bd\x2dbd2f\x2d73e016c04f0e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 7 01:33:51.833751 kubelet[2709]: I0307 01:33:51.833723 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950033ed-d8e0-41bd-bd2f-73e016c04f0e-node-certs" (OuterVolumeSpecName: "node-certs") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:33:51.834165 kubelet[2709]: I0307 01:33:51.834148 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950033ed-d8e0-41bd-bd2f-73e016c04f0e-kube-api-access-vjnt2" (OuterVolumeSpecName: "kube-api-access-vjnt2") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "kube-api-access-vjnt2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:33:51.853539 kubelet[2709]: I0307 01:33:51.849927 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950033ed-d8e0-41bd-bd2f-73e016c04f0e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "950033ed-d8e0-41bd-bd2f-73e016c04f0e" (UID: "950033ed-d8e0-41bd-bd2f-73e016c04f0e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.822 [WARNING][5819] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"82ecbaaa-38d2-47ca-8766-21b7ed9556a7", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"bfb5a58846230e47efb41952ce9582a6f3cd2e05ab49c4d4a415bf9fd8c575c2", Pod:"goldmane-5b85766d88-dxh9d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic1411b8d4c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.822 [INFO][5819] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 
01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.822 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" iface="eth0" netns="" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.822 [INFO][5819] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.822 [INFO][5819] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.869 [INFO][5829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.869 [INFO][5829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.869 [INFO][5829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.878 [WARNING][5829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.878 [INFO][5829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" HandleID="k8s-pod-network.8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Workload="172--238--171--132-k8s-goldmane--5b85766d88--dxh9d-eth0" Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.879 [INFO][5829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.883816 containerd[1558]: 2026-03-07 01:33:51.881 [INFO][5819] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b" Mar 7 01:33:51.884816 containerd[1558]: time="2026-03-07T01:33:51.883870513Z" level=info msg="TearDown network for sandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" successfully" Mar 7 01:33:51.889585 containerd[1558]: time="2026-03-07T01:33:51.889373015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:51.889659 containerd[1558]: time="2026-03-07T01:33:51.889641745Z" level=info msg="RemovePodSandbox \"8d9721fd30def53d0aeec7f071e4775cad37910bf3a6bd727193f1a34cef903b\" returns successfully" Mar 7 01:33:51.890403 containerd[1558]: time="2026-03-07T01:33:51.890383465Z" level=info msg="StopPodSandbox for \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\"" Mar 7 01:33:51.914925 kubelet[2709]: I0307 01:33:51.913666 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-lib-modules\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.914925 kubelet[2709]: I0307 01:33:51.913702 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-policysync\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.914925 kubelet[2709]: I0307 01:33:51.913720 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-flexvol-driver-host\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.914925 kubelet[2709]: I0307 01:33:51.913737 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-sys-fs\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.914925 kubelet[2709]: I0307 01:33:51.913751 2709 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-cni-net-dir\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.915167 kubelet[2709]: I0307 01:33:51.913764 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d331c9-fa22-437c-be7d-1279484fac8c-tigera-ca-bundle\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.915167 kubelet[2709]: I0307 01:33:51.913777 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgcg\" (UniqueName: \"kubernetes.io/projected/09d331c9-fa22-437c-be7d-1279484fac8c-kube-api-access-qlgcg\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.916919 kubelet[2709]: I0307 01:33:51.916021 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/09d331c9-fa22-437c-be7d-1279484fac8c-node-certs\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.916919 kubelet[2709]: I0307 01:33:51.916067 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-nodeproc\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.916919 kubelet[2709]: I0307 01:33:51.916209 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-xtables-lock\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.916919 kubelet[2709]: I0307 01:33:51.916231 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-bpffs\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.918944 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-cni-bin-dir\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.918962 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-cni-log-dir\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.918977 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-var-lib-calico\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.919011 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/09d331c9-fa22-437c-be7d-1279484fac8c-var-run-calico\") pod \"calico-node-sbt89\" (UID: \"09d331c9-fa22-437c-be7d-1279484fac8c\") " pod="calico-system/calico-node-sbt89" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.919178 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vjnt2\" (UniqueName: \"kubernetes.io/projected/950033ed-d8e0-41bd-bd2f-73e016c04f0e-kube-api-access-vjnt2\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.919300 kubelet[2709]: I0307 01:33:51.919192 2709 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-net-dir\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.919423 kubelet[2709]: I0307 01:33:51.919202 2709 reconciler_common.go:299] "Volume detached for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-sys-fs\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.919423 kubelet[2709]: I0307 01:33:51.919210 2709 reconciler_common.go:299] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-bpffs\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.919423 kubelet[2709]: I0307 01:33:51.919218 2709 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/950033ed-d8e0-41bd-bd2f-73e016c04f0e-node-certs\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.919225 2709 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-bin-dir\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920311 2709 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-policysync\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920321 2709 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-flexvol-driver-host\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920330 2709 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-xtables-lock\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920339 2709 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/950033ed-d8e0-41bd-bd2f-73e016c04f0e-tigera-ca-bundle\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920346 2709 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-var-run-calico\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.920390 kubelet[2709]: I0307 01:33:51.920373 2709 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/950033ed-d8e0-41bd-bd2f-73e016c04f0e-cni-log-dir\") on node \"172-238-171-132\" DevicePath \"\"" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.931 [WARNING][5843] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" WorkloadEndpoint="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.932 [INFO][5843] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.932 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" iface="eth0" netns="" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.932 [INFO][5843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.932 [INFO][5843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.962 [INFO][5850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.962 [INFO][5850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.962 [INFO][5850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.967 [WARNING][5850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.967 [INFO][5850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.970 [INFO][5850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:51.976539 containerd[1558]: 2026-03-07 01:33:51.973 [INFO][5843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:51.976993 containerd[1558]: time="2026-03-07T01:33:51.976599979Z" level=info msg="TearDown network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" successfully" Mar 7 01:33:51.976993 containerd[1558]: time="2026-03-07T01:33:51.976666019Z" level=info msg="StopPodSandbox for \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" returns successfully" Mar 7 01:33:51.977346 containerd[1558]: time="2026-03-07T01:33:51.977326290Z" level=info msg="RemovePodSandbox for \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\"" Mar 7 01:33:51.977384 containerd[1558]: time="2026-03-07T01:33:51.977352260Z" level=info msg="Forcibly stopping sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\"" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.017 [WARNING][5865] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" WorkloadEndpoint="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.018 [INFO][5865] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.018 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" iface="eth0" netns="" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.018 [INFO][5865] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.018 [INFO][5865] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.057 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.057 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.057 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.063 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.063 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" HandleID="k8s-pod-network.0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Workload="172--238--171--132-k8s-whisker--6d785d65b8--24w74-eth0" Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.065 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:52.070120 containerd[1558]: 2026-03-07 01:33:52.067 [INFO][5865] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199" Mar 7 01:33:52.070637 containerd[1558]: time="2026-03-07T01:33:52.070346807Z" level=info msg="TearDown network for sandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" successfully" Mar 7 01:33:52.074872 containerd[1558]: time="2026-03-07T01:33:52.074648458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:52.074872 containerd[1558]: time="2026-03-07T01:33:52.074735818Z" level=info msg="RemovePodSandbox \"0e4aa14cc84ca41bd22cd10f241876fa4dfaa33e0b46f2dcaaee1d919d9f8199\" returns successfully" Mar 7 01:33:52.075771 containerd[1558]: time="2026-03-07T01:33:52.075503759Z" level=info msg="StopPodSandbox for \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\"" Mar 7 01:33:52.094031 containerd[1558]: time="2026-03-07T01:33:52.093104607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sbt89,Uid:09d331c9-fa22-437c-be7d-1279484fac8c,Namespace:calico-system,Attempt:0,}" Mar 7 01:33:52.154297 containerd[1558]: time="2026-03-07T01:33:52.145296367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:33:52.154297 containerd[1558]: time="2026-03-07T01:33:52.145346087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:33:52.154297 containerd[1558]: time="2026-03-07T01:33:52.145359507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:52.154297 containerd[1558]: time="2026-03-07T01:33:52.145441627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:33:52.154498 systemd[1]: var-lib-kubelet-pods-950033ed\x2dd8e0\x2d41bd\x2dbd2f\x2d73e016c04f0e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-7.mount: Deactivated successfully. Mar 7 01:33:52.154684 systemd[1]: var-lib-kubelet-pods-950033ed\x2dd8e0\x2d41bd\x2dbd2f\x2d73e016c04f0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjnt2.mount: Deactivated successfully. 
Mar 7 01:33:52.214611 systemd[1]: run-containerd-runc-k8s.io-ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d-runc.HjmNjy.mount: Deactivated successfully. Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.132 [WARNING][5889] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343", Pod:"coredns-674b8bbfcf-vdbpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b02ce7dbae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.133 [INFO][5889] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.133 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" iface="eth0" netns="" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.133 [INFO][5889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.133 [INFO][5889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.229 [INFO][5914] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.229 [INFO][5914] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.230 [INFO][5914] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.237 [WARNING][5914] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.237 [INFO][5914] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.238 [INFO][5914] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:52.243855 containerd[1558]: 2026-03-07 01:33:52.241 [INFO][5889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.244808 containerd[1558]: time="2026-03-07T01:33:52.244676287Z" level=info msg="TearDown network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" successfully" Mar 7 01:33:52.244808 containerd[1558]: time="2026-03-07T01:33:52.244702917Z" level=info msg="StopPodSandbox for \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" returns successfully" Mar 7 01:33:52.250430 containerd[1558]: time="2026-03-07T01:33:52.250187959Z" level=info msg="RemovePodSandbox for \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\"" Mar 7 01:33:52.250430 containerd[1558]: time="2026-03-07T01:33:52.250215249Z" level=info msg="Forcibly stopping sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\"" Mar 7 01:33:52.272933 containerd[1558]: time="2026-03-07T01:33:52.272876168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sbt89,Uid:09d331c9-fa22-437c-be7d-1279484fac8c,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\"" Mar 7 01:33:52.280791 containerd[1558]: time="2026-03-07T01:33:52.280431691Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:33:52.328747 containerd[1558]: time="2026-03-07T01:33:52.328252700Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d431e9c8ef20c1e04d01845a2da8ecfd39e4bea2842df350b35fed5becaf052c\"" Mar 7 01:33:52.329460 containerd[1558]: time="2026-03-07T01:33:52.329403921Z" level=info msg="StartContainer for \"d431e9c8ef20c1e04d01845a2da8ecfd39e4bea2842df350b35fed5becaf052c\"" Mar 7 01:33:52.334092 kubelet[2709]: I0307 01:33:52.332746 2709 scope.go:117] "RemoveContainer" containerID="6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9" Mar 7 01:33:52.338101 containerd[1558]: time="2026-03-07T01:33:52.338079915Z" level=info msg="RemoveContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\"" Mar 7 01:33:52.354014 containerd[1558]: time="2026-03-07T01:33:52.353984321Z" level=info msg="RemoveContainer for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" returns successfully" Mar 7 01:33:52.354364 kubelet[2709]: I0307 01:33:52.354350 2709 scope.go:117] "RemoveContainer" containerID="ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9" Mar 7 01:33:52.355398 containerd[1558]: time="2026-03-07T01:33:52.355381112Z" level=info msg="RemoveContainer for \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\"" Mar 7 01:33:52.367394 containerd[1558]: time="2026-03-07T01:33:52.367369317Z" level=info msg="RemoveContainer for \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\" returns successfully" Mar 7 
01:33:52.367864 kubelet[2709]: I0307 01:33:52.367845 2709 scope.go:117] "RemoveContainer" containerID="75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6" Mar 7 01:33:52.369293 containerd[1558]: time="2026-03-07T01:33:52.369245697Z" level=info msg="RemoveContainer for \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\"" Mar 7 01:33:52.373867 containerd[1558]: time="2026-03-07T01:33:52.373846379Z" level=info msg="RemoveContainer for \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\" returns successfully" Mar 7 01:33:52.375663 kubelet[2709]: I0307 01:33:52.375648 2709 scope.go:117] "RemoveContainer" containerID="e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677" Mar 7 01:33:52.376874 containerd[1558]: time="2026-03-07T01:33:52.376856460Z" level=info msg="RemoveContainer for \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\"" Mar 7 01:33:52.384452 containerd[1558]: time="2026-03-07T01:33:52.384433443Z" level=info msg="RemoveContainer for \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\" returns successfully" Mar 7 01:33:52.384649 kubelet[2709]: I0307 01:33:52.384634 2709 scope.go:117] "RemoveContainer" containerID="6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9" Mar 7 01:33:52.384977 containerd[1558]: time="2026-03-07T01:33:52.384885533Z" level=error msg="ContainerStatus for \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\": not found" Mar 7 01:33:52.386261 kubelet[2709]: E0307 01:33:52.386003 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\": not found" 
containerID="6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9" Mar 7 01:33:52.386261 kubelet[2709]: I0307 01:33:52.386027 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9"} err="failed to get container status \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ee2bf8529e70d41a595456a7c4984ccb7ffb60c4dda5821917fa1a549d1c2f9\": not found" Mar 7 01:33:52.386261 kubelet[2709]: I0307 01:33:52.386061 2709 scope.go:117] "RemoveContainer" containerID="ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9" Mar 7 01:33:52.386551 containerd[1558]: time="2026-03-07T01:33:52.386216634Z" level=error msg="ContainerStatus for \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\": not found" Mar 7 01:33:52.387038 kubelet[2709]: E0307 01:33:52.386719 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\": not found" containerID="ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9" Mar 7 01:33:52.387038 kubelet[2709]: I0307 01:33:52.386765 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9"} err="failed to get container status \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea473cc9d0fb59fdbcc667b2d99dceff254654c8951a0a4be53432d9ef5191c9\": not found" Mar 7 01:33:52.387038 
kubelet[2709]: I0307 01:33:52.386784 2709 scope.go:117] "RemoveContainer" containerID="75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6" Mar 7 01:33:52.387407 containerd[1558]: time="2026-03-07T01:33:52.387244905Z" level=error msg="ContainerStatus for \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\": not found" Mar 7 01:33:52.387453 kubelet[2709]: E0307 01:33:52.387334 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\": not found" containerID="75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6" Mar 7 01:33:52.387453 kubelet[2709]: I0307 01:33:52.387351 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6"} err="failed to get container status \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"75b13db193d89e38dd82c88a2c59591ba801ca077a17ea6fb2d1a787a544a5c6\": not found" Mar 7 01:33:52.387453 kubelet[2709]: I0307 01:33:52.387363 2709 scope.go:117] "RemoveContainer" containerID="e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677" Mar 7 01:33:52.387816 containerd[1558]: time="2026-03-07T01:33:52.387643575Z" level=error msg="ContainerStatus for \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\": not found" Mar 7 01:33:52.387861 kubelet[2709]: E0307 01:33:52.387750 2709 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\": not found" containerID="e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677" Mar 7 01:33:52.387861 kubelet[2709]: I0307 01:33:52.387794 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677"} err="failed to get container status \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\": rpc error: code = NotFound desc = an error occurred when try to find container \"e04c09b503d36b5492d66f1c68ef1367f1adbe540c4260abb086ebe744558677\": not found" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.374 [WARNING][5947] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab579b3b-b7c0-44e4-9f7d-388d9a61e9ba", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", 
ContainerID:"d13fb20d5ba00b36091d0b96dcdbf3281e5a2b2541fa8bf1f8463aa8a6ef2343", Pod:"coredns-674b8bbfcf-vdbpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b02ce7dbae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.378 [INFO][5947] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.378 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" iface="eth0" netns="" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.379 [INFO][5947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.379 [INFO][5947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.430 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.431 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.431 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.443 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.443 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" HandleID="k8s-pod-network.c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Workload="172--238--171--132-k8s-coredns--674b8bbfcf--vdbpr-eth0" Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.445 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:52.451245 containerd[1558]: 2026-03-07 01:33:52.447 [INFO][5947] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff" Mar 7 01:33:52.451768 containerd[1558]: time="2026-03-07T01:33:52.451698540Z" level=info msg="TearDown network for sandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" successfully" Mar 7 01:33:52.456219 containerd[1558]: time="2026-03-07T01:33:52.456161852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:52.456375 containerd[1558]: time="2026-03-07T01:33:52.456320332Z" level=info msg="RemovePodSandbox \"c44493925b1d3bf45cf447de841102d181e3a6503f1094e0ad86588fbcb5f9ff\" returns successfully" Mar 7 01:33:52.457553 containerd[1558]: time="2026-03-07T01:33:52.457323773Z" level=info msg="StopPodSandbox for \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\"" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.511 [WARNING][5997] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"a05ad463-2b61-48be-ab34-432b9b18b36f", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84", Pod:"calico-apiserver-5966b74f89-6sjb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0eda48199d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.512 [INFO][5997] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.512 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" iface="eth0" netns="" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.512 [INFO][5997] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.512 [INFO][5997] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.535 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.535 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.535 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.540 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.540 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.542 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:52.556564 containerd[1558]: 2026-03-07 01:33:52.548 [INFO][5997] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.557260 containerd[1558]: time="2026-03-07T01:33:52.556998082Z" level=info msg="TearDown network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" successfully" Mar 7 01:33:52.557260 containerd[1558]: time="2026-03-07T01:33:52.557022772Z" level=info msg="StopPodSandbox for \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" returns successfully" Mar 7 01:33:52.558088 containerd[1558]: time="2026-03-07T01:33:52.557831143Z" level=info msg="RemovePodSandbox for \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\"" Mar 7 01:33:52.558088 containerd[1558]: time="2026-03-07T01:33:52.557853333Z" level=info msg="Forcibly stopping sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\"" Mar 7 01:33:52.581669 containerd[1558]: time="2026-03-07T01:33:52.581567813Z" level=info msg="StartContainer for \"d431e9c8ef20c1e04d01845a2da8ecfd39e4bea2842df350b35fed5becaf052c\" returns successfully" Mar 7 01:33:52.656470 
containerd[1558]: 2026-03-07 01:33:52.605 [WARNING][6024] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0", GenerateName:"calico-apiserver-5966b74f89-", Namespace:"calico-system", SelfLink:"", UID:"a05ad463-2b61-48be-ab34-432b9b18b36f", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 33, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5966b74f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-132", ContainerID:"0203b12f474f3366996c230bbc2d652f59b459fa563bc60f9f9c8e1452bd7e84", Pod:"calico-apiserver-5966b74f89-6sjb8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0eda48199d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.606 [INFO][6024] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.606 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" iface="eth0" netns="" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.606 [INFO][6024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.606 [INFO][6024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.631 [INFO][6033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.631 [INFO][6033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.631 [INFO][6033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.647 [WARNING][6033] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.647 [INFO][6033] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" HandleID="k8s-pod-network.525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Workload="172--238--171--132-k8s-calico--apiserver--5966b74f89--6sjb8-eth0" Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.648 [INFO][6033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:33:52.656470 containerd[1558]: 2026-03-07 01:33:52.652 [INFO][6024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7" Mar 7 01:33:52.656470 containerd[1558]: time="2026-03-07T01:33:52.656378592Z" level=info msg="TearDown network for sandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" successfully" Mar 7 01:33:52.660688 containerd[1558]: time="2026-03-07T01:33:52.660404874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:33:52.660688 containerd[1558]: time="2026-03-07T01:33:52.660509954Z" level=info msg="RemovePodSandbox \"525b9ecdf6fb81566e00d4f766e6d67ad2803ecd50b1188485f51617c3e517c7\" returns successfully" Mar 7 01:33:52.704233 containerd[1558]: time="2026-03-07T01:33:52.704184952Z" level=info msg="shim disconnected" id=d431e9c8ef20c1e04d01845a2da8ecfd39e4bea2842df350b35fed5becaf052c namespace=k8s.io Mar 7 01:33:52.704565 containerd[1558]: time="2026-03-07T01:33:52.704434572Z" level=warning msg="cleaning up after shim disconnected" id=d431e9c8ef20c1e04d01845a2da8ecfd39e4bea2842df350b35fed5becaf052c namespace=k8s.io Mar 7 01:33:52.704565 containerd[1558]: time="2026-03-07T01:33:52.704447522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:52.724070 containerd[1558]: time="2026-03-07T01:33:52.724031160Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:33:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:33:52.814148 kubelet[2709]: I0307 01:33:52.813852 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950033ed-d8e0-41bd-bd2f-73e016c04f0e" path="/var/lib/kubelet/pods/950033ed-d8e0-41bd-bd2f-73e016c04f0e/volumes" Mar 7 01:33:53.149812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211100905.mount: Deactivated successfully. 
Mar 7 01:33:53.368895 containerd[1558]: time="2026-03-07T01:33:53.368855561Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:33:53.391688 containerd[1558]: time="2026-03-07T01:33:53.391479550Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576\"" Mar 7 01:33:53.392487 containerd[1558]: time="2026-03-07T01:33:53.392063071Z" level=info msg="StartContainer for \"b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576\"" Mar 7 01:33:53.497526 containerd[1558]: time="2026-03-07T01:33:53.495357592Z" level=info msg="StartContainer for \"b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576\" returns successfully" Mar 7 01:33:53.568659 containerd[1558]: time="2026-03-07T01:33:53.568569232Z" level=info msg="shim disconnected" id=b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576 namespace=k8s.io Mar 7 01:33:53.568659 containerd[1558]: time="2026-03-07T01:33:53.568643352Z" level=warning msg="cleaning up after shim disconnected" id=b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576 namespace=k8s.io Mar 7 01:33:53.568659 containerd[1558]: time="2026-03-07T01:33:53.568654662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:53.596180 containerd[1558]: time="2026-03-07T01:33:53.594238683Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:33:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:33:54.148656 systemd[1]: run-containerd-runc-k8s.io-b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576-runc.hs1Aje.mount: 
Deactivated successfully. Mar 7 01:33:54.148836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b0490650715897a57ec79b7704d2b9336975801b6851c45f007df3560c9576-rootfs.mount: Deactivated successfully. Mar 7 01:33:54.372336 containerd[1558]: time="2026-03-07T01:33:54.372299501Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:33:54.387317 containerd[1558]: time="2026-03-07T01:33:54.386965748Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9\"" Mar 7 01:33:54.388925 containerd[1558]: time="2026-03-07T01:33:54.388591789Z" level=info msg="StartContainer for \"efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9\"" Mar 7 01:33:54.471971 containerd[1558]: time="2026-03-07T01:33:54.471518132Z" level=info msg="StartContainer for \"efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9\" returns successfully" Mar 7 01:33:54.963293 containerd[1558]: time="2026-03-07T01:33:54.963239756Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Mar 7 01:33:54.994243 containerd[1558]: time="2026-03-07T01:33:54.994195428Z" level=info msg="shim disconnected" id=efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9 namespace=k8s.io Mar 7 01:33:54.994243 containerd[1558]: time="2026-03-07T01:33:54.994238948Z" level=warning msg="cleaning up after shim disconnected" 
id=efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9 namespace=k8s.io Mar 7 01:33:54.994243 containerd[1558]: time="2026-03-07T01:33:54.994248128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:33:55.151474 systemd[1]: run-containerd-runc-k8s.io-efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9-runc.jdQLhf.mount: Deactivated successfully. Mar 7 01:33:55.152463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efa1b745ea0ce6a37f568cff2b39d178addf8dfe53913a9a892ee5fe624229e9-rootfs.mount: Deactivated successfully. Mar 7 01:33:55.397768 containerd[1558]: time="2026-03-07T01:33:55.397727277Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:33:55.414866 containerd[1558]: time="2026-03-07T01:33:55.414826594Z" level=info msg="CreateContainer within sandbox \"ea6cd7353d4d40c9fe05f1d3f1cdf36aefc6592813cba73918761774ea94261d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"75dfdaa08ff192a819467bb74b27d8e696f375b1629c1b67895832225a9029e9\"" Mar 7 01:33:55.419260 containerd[1558]: time="2026-03-07T01:33:55.419130875Z" level=info msg="StartContainer for \"75dfdaa08ff192a819467bb74b27d8e696f375b1629c1b67895832225a9029e9\"" Mar 7 01:33:55.425213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509209236.mount: Deactivated successfully. 
Mar 7 01:33:55.505419 containerd[1558]: time="2026-03-07T01:33:55.505365512Z" level=info msg="StartContainer for \"75dfdaa08ff192a819467bb74b27d8e696f375b1629c1b67895832225a9029e9\" returns successfully" Mar 7 01:33:58.274990 kubelet[2709]: I0307 01:33:58.274930 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:33:58.291820 kubelet[2709]: I0307 01:33:58.291691 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sbt89" podStartSLOduration=7.291678038 podStartE2EDuration="7.291678038s" podCreationTimestamp="2026-03-07 01:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:33:56.40350541 +0000 UTC m=+65.700818443" watchObservedRunningTime="2026-03-07 01:33:58.291678038 +0000 UTC m=+67.588991031" Mar 7 01:34:02.332262 systemd[1]: run-containerd-runc-k8s.io-739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666-runc.HR9jvS.mount: Deactivated successfully. 
Mar 7 01:34:02.523534 kubelet[2709]: I0307 01:34:02.523499 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:34:07.805701 kubelet[2709]: E0307 01:34:07.805662 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:12.806631 kubelet[2709]: E0307 01:34:12.805643 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:21.805460 kubelet[2709]: E0307 01:34:21.805392 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:27.805455 kubelet[2709]: E0307 01:34:27.805424 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:29.805400 kubelet[2709]: E0307 01:34:29.805264 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:29.805400 kubelet[2709]: E0307 01:34:29.805283 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:36.806679 kubelet[2709]: E0307 01:34:36.805761 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:34:46.388112 systemd[1]: Started sshd@7-172.238.171.132:22-68.220.241.50:38106.service - 
OpenSSH per-connection server daemon (68.220.241.50:38106). Mar 7 01:34:46.545610 sshd[6745]: Accepted publickey for core from 68.220.241.50 port 38106 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:34:46.547139 sshd[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:46.551634 systemd-logind[1538]: New session 8 of user core. Mar 7 01:34:46.556263 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:34:46.792402 sshd[6745]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:46.797118 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:34:46.798841 systemd[1]: sshd@7-172.238.171.132:22-68.220.241.50:38106.service: Deactivated successfully. Mar 7 01:34:46.803294 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:34:46.804533 systemd-logind[1538]: Removed session 8. Mar 7 01:34:51.836568 systemd[1]: Started sshd@8-172.238.171.132:22-68.220.241.50:38116.service - OpenSSH per-connection server daemon (68.220.241.50:38116). Mar 7 01:34:51.993651 sshd[6762]: Accepted publickey for core from 68.220.241.50 port 38116 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:34:51.997061 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:52.003036 systemd-logind[1538]: New session 9 of user core. Mar 7 01:34:52.013213 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:34:52.273207 sshd[6762]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:52.277757 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:34:52.279846 systemd[1]: sshd@8-172.238.171.132:22-68.220.241.50:38116.service: Deactivated successfully. Mar 7 01:34:52.285423 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:34:52.288152 systemd-logind[1538]: Removed session 9. 
Mar 7 01:34:52.665804 containerd[1558]: time="2026-03-07T01:34:52.665770100Z" level=info msg="StopPodSandbox for \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\"" Mar 7 01:34:52.666308 containerd[1558]: time="2026-03-07T01:34:52.665865320Z" level=info msg="TearDown network for sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" successfully" Mar 7 01:34:52.666308 containerd[1558]: time="2026-03-07T01:34:52.665876910Z" level=info msg="StopPodSandbox for \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" returns successfully" Mar 7 01:34:52.668164 containerd[1558]: time="2026-03-07T01:34:52.668139309Z" level=info msg="RemovePodSandbox for \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\"" Mar 7 01:34:52.668250 containerd[1558]: time="2026-03-07T01:34:52.668166349Z" level=info msg="Forcibly stopping sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\"" Mar 7 01:34:52.668250 containerd[1558]: time="2026-03-07T01:34:52.668215309Z" level=info msg="TearDown network for sandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" successfully" Mar 7 01:34:52.672784 containerd[1558]: time="2026-03-07T01:34:52.672664899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:34:52.672784 containerd[1558]: time="2026-03-07T01:34:52.672707728Z" level=info msg="RemovePodSandbox \"121a1cea0be0d3a164844da10301ce474d8b53b643d8b852dc836f0073672bbd\" returns successfully" Mar 7 01:34:57.299635 systemd[1]: Started sshd@9-172.238.171.132:22-68.220.241.50:42062.service - OpenSSH per-connection server daemon (68.220.241.50:42062). 
Mar 7 01:34:57.462958 sshd[6779]: Accepted publickey for core from 68.220.241.50 port 42062 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:34:57.465295 sshd[6779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:34:57.473234 systemd-logind[1538]: New session 10 of user core.
Mar 7 01:34:57.479209 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:34:57.705980 sshd[6779]: pam_unix(sshd:session): session closed for user core
Mar 7 01:34:57.711071 systemd[1]: sshd@9-172.238.171.132:22-68.220.241.50:42062.service: Deactivated successfully.
Mar 7 01:34:57.717079 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:34:57.717234 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:34:57.719295 systemd-logind[1538]: Removed session 10.
Mar 7 01:34:57.734087 systemd[1]: Started sshd@10-172.238.171.132:22-68.220.241.50:42064.service - OpenSSH per-connection server daemon (68.220.241.50:42064).
Mar 7 01:34:57.883544 sshd[6795]: Accepted publickey for core from 68.220.241.50 port 42064 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:34:57.888115 sshd[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:34:57.899762 systemd-logind[1538]: New session 11 of user core.
Mar 7 01:34:57.905510 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:34:58.191655 sshd[6795]: pam_unix(sshd:session): session closed for user core
Mar 7 01:34:58.204282 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:34:58.204817 systemd[1]: sshd@10-172.238.171.132:22-68.220.241.50:42064.service: Deactivated successfully.
Mar 7 01:34:58.213676 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:34:58.217169 systemd-logind[1538]: Removed session 11.
Mar 7 01:34:58.222219 systemd[1]: Started sshd@11-172.238.171.132:22-68.220.241.50:42072.service - OpenSSH per-connection server daemon (68.220.241.50:42072).
Mar 7 01:34:58.380481 sshd[6820]: Accepted publickey for core from 68.220.241.50 port 42072 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:34:58.383799 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:34:58.390926 systemd-logind[1538]: New session 12 of user core.
Mar 7 01:34:58.395396 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:34:58.613302 sshd[6820]: pam_unix(sshd:session): session closed for user core
Mar 7 01:34:58.618987 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:34:58.621295 systemd[1]: sshd@11-172.238.171.132:22-68.220.241.50:42072.service: Deactivated successfully.
Mar 7 01:34:58.625219 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:34:58.628056 systemd-logind[1538]: Removed session 12.
Mar 7 01:35:03.642219 systemd[1]: Started sshd@12-172.238.171.132:22-68.220.241.50:43034.service - OpenSSH per-connection server daemon (68.220.241.50:43034).
Mar 7 01:35:03.795933 sshd[6886]: Accepted publickey for core from 68.220.241.50 port 43034 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:03.798114 sshd[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:03.803665 systemd-logind[1538]: New session 13 of user core.
Mar 7 01:35:03.809313 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:35:04.010371 sshd[6886]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:04.017236 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:35:04.017451 systemd[1]: sshd@12-172.238.171.132:22-68.220.241.50:43034.service: Deactivated successfully.
Mar 7 01:35:04.022254 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:35:04.023361 systemd-logind[1538]: Removed session 13.
Mar 7 01:35:04.048188 systemd[1]: Started sshd@13-172.238.171.132:22-68.220.241.50:43050.service - OpenSSH per-connection server daemon (68.220.241.50:43050).
Mar 7 01:35:04.234797 sshd[6900]: Accepted publickey for core from 68.220.241.50 port 43050 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:04.235796 sshd[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:04.240301 systemd-logind[1538]: New session 14 of user core.
Mar 7 01:35:04.246271 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:35:04.640513 sshd[6900]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:04.646223 systemd[1]: sshd@13-172.238.171.132:22-68.220.241.50:43050.service: Deactivated successfully.
Mar 7 01:35:04.647393 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:35:04.651445 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:35:04.653346 systemd-logind[1538]: Removed session 14.
Mar 7 01:35:04.664100 systemd[1]: Started sshd@14-172.238.171.132:22-68.220.241.50:43062.service - OpenSSH per-connection server daemon (68.220.241.50:43062).
Mar 7 01:35:04.816711 sshd[6912]: Accepted publickey for core from 68.220.241.50 port 43062 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:04.818895 sshd[6912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:04.824100 systemd-logind[1538]: New session 15 of user core.
Mar 7 01:35:04.828316 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:35:05.512454 sshd[6912]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:05.515393 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:35:05.516493 systemd[1]: sshd@14-172.238.171.132:22-68.220.241.50:43062.service: Deactivated successfully.
Mar 7 01:35:05.526189 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:35:05.527998 systemd-logind[1538]: Removed session 15.
Mar 7 01:35:05.544391 systemd[1]: Started sshd@15-172.238.171.132:22-68.220.241.50:43068.service - OpenSSH per-connection server daemon (68.220.241.50:43068).
Mar 7 01:35:05.603395 systemd[1]: run-containerd-runc-k8s.io-739a81f89d0fb4cefd3d11f48cb86fe1c9623d93fe75b004fe817520352f2666-runc.5dKg3u.mount: Deactivated successfully.
Mar 7 01:35:05.721874 sshd[6954]: Accepted publickey for core from 68.220.241.50 port 43068 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:05.723902 sshd[6954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:05.729392 systemd-logind[1538]: New session 16 of user core.
Mar 7 01:35:05.737348 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:35:06.093170 sshd[6954]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:06.098165 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:35:06.100222 systemd[1]: sshd@15-172.238.171.132:22-68.220.241.50:43068.service: Deactivated successfully.
Mar 7 01:35:06.106674 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:35:06.107925 systemd-logind[1538]: Removed session 16.
Mar 7 01:35:06.128261 systemd[1]: Started sshd@16-172.238.171.132:22-68.220.241.50:43080.service - OpenSSH per-connection server daemon (68.220.241.50:43080).
Mar 7 01:35:06.312782 sshd[6991]: Accepted publickey for core from 68.220.241.50 port 43080 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:06.315575 sshd[6991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:06.321734 systemd-logind[1538]: New session 17 of user core.
Mar 7 01:35:06.326499 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:35:06.565305 sshd[6991]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:06.569197 systemd[1]: sshd@16-172.238.171.132:22-68.220.241.50:43080.service: Deactivated successfully.
Mar 7 01:35:06.574044 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:35:06.574577 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:35:06.576886 systemd-logind[1538]: Removed session 17.
Mar 7 01:35:11.591205 systemd[1]: Started sshd@17-172.238.171.132:22-68.220.241.50:43086.service - OpenSSH per-connection server daemon (68.220.241.50:43086).
Mar 7 01:35:11.741993 sshd[7007]: Accepted publickey for core from 68.220.241.50 port 43086 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:11.743893 sshd[7007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:11.751409 systemd-logind[1538]: New session 18 of user core.
Mar 7 01:35:11.756185 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:35:11.940895 sshd[7007]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:11.946509 systemd[1]: sshd@17-172.238.171.132:22-68.220.241.50:43086.service: Deactivated successfully.
Mar 7 01:35:11.947715 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:35:11.951198 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:35:11.952709 systemd-logind[1538]: Removed session 18.
Mar 7 01:35:16.971406 systemd[1]: Started sshd@18-172.238.171.132:22-68.220.241.50:48548.service - OpenSSH per-connection server daemon (68.220.241.50:48548).
Mar 7 01:35:17.120228 sshd[7021]: Accepted publickey for core from 68.220.241.50 port 48548 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:35:17.122090 sshd[7021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:35:17.127196 systemd-logind[1538]: New session 19 of user core.
Mar 7 01:35:17.133175 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:35:17.305434 sshd[7021]: pam_unix(sshd:session): session closed for user core
Mar 7 01:35:17.311757 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:35:17.313008 systemd[1]: sshd@18-172.238.171.132:22-68.220.241.50:48548.service: Deactivated successfully.
Mar 7 01:35:17.316879 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:35:17.318064 systemd-logind[1538]: Removed session 19.