Mar 14 00:12:42.980657 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:12:42.980680 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:42.980688 kernel: BIOS-provided physical RAM map:
Mar 14 00:12:42.980695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 14 00:12:42.980700 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 14 00:12:42.980709 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:12:42.980715 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 14 00:12:42.980721 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 14 00:12:42.980726 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:12:42.980732 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:12:42.980738 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:12:42.980744 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:12:42.980749 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 14 00:12:42.980758 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:12:42.980765 kernel: NX (Execute Disable) protection: active
Mar 14 00:12:42.980771 kernel: APIC: Static calls initialized
Mar 14 00:12:42.980777 kernel: SMBIOS 2.8 present.
Mar 14 00:12:42.980783 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 14 00:12:42.980789 kernel: Hypervisor detected: KVM
Mar 14 00:12:42.980798 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:12:42.980804 kernel: kvm-clock: using sched offset of 6000722411 cycles
Mar 14 00:12:42.980810 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:12:42.980816 kernel: tsc: Detected 2000.002 MHz processor
Mar 14 00:12:42.980823 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:12:42.980829 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:12:42.980836 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 14 00:12:42.980842 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:12:42.980848 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:12:42.980856 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 14 00:12:42.980863 kernel: Using GB pages for direct mapping
Mar 14 00:12:42.980869 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:12:42.980875 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 14 00:12:42.980881 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980887 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980893 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980899 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 14 00:12:42.980905 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980914 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980920 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980926 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980936 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 14 00:12:42.980943 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 14 00:12:42.980949 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 14 00:12:42.980958 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 14 00:12:42.980964 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 14 00:12:42.980971 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 14 00:12:42.980977 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 14 00:12:42.980989 kernel: No NUMA configuration found
Mar 14 00:12:42.981000 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 14 00:12:42.981012 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Mar 14 00:12:42.981023 kernel: Zone ranges:
Mar 14 00:12:42.981040 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:12:42.981051 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:12:42.981058 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 14 00:12:42.981065 kernel: Movable zone start for each node
Mar 14 00:12:42.981071 kernel: Early memory node ranges
Mar 14 00:12:42.981078 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:12:42.981084 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 14 00:12:42.981090 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 14 00:12:42.981097 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 14 00:12:42.981103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:12:42.981113 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:12:42.981119 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 14 00:12:42.981126 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:12:42.981132 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:12:42.981139 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:12:42.981145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:12:42.981152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:12:42.981158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:12:42.981165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:12:42.981174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:12:42.981180 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:12:42.981187 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:12:42.981193 kernel: TSC deadline timer available
Mar 14 00:12:42.981200 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:12:42.981206 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:12:42.981212 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:12:42.981219 kernel: kvm-guest: setup PV sched yield
Mar 14 00:12:42.981225 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:12:42.981234 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:12:42.981241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:12:42.981247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:12:42.981254 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:12:42.981260 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:12:42.981267 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:12:42.981273 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:12:42.981280 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:12:42.981287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:42.981296 kernel: random: crng init done
Mar 14 00:12:42.981303 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:12:42.981310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:12:42.981316 kernel: Fallback order for Node 0: 0
Mar 14 00:12:42.981323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 14 00:12:42.981329 kernel: Policy zone: Normal
Mar 14 00:12:42.981335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:12:42.981342 kernel: software IO TLB: area num 2.
Mar 14 00:12:42.981351 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Mar 14 00:12:42.981357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:12:42.981364 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:12:42.981370 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:12:42.981377 kernel: Dynamic Preempt: voluntary
Mar 14 00:12:42.981383 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:12:42.981391 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:12:42.981397 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:12:42.981404 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:12:42.981413 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:12:42.981420 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:12:42.981426 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:12:42.981433 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:12:42.981439 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:12:42.981445 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:12:42.981452 kernel: Console: colour VGA+ 80x25
Mar 14 00:12:42.981458 kernel: printk: console [tty0] enabled
Mar 14 00:12:42.981465 kernel: printk: console [ttyS0] enabled
Mar 14 00:12:42.981473 kernel: ACPI: Core revision 20230628
Mar 14 00:12:42.981480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:12:42.981486 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:12:42.981493 kernel: x2apic enabled
Mar 14 00:12:42.981508 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:12:42.981517 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:12:42.981524 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:12:42.981531 kernel: kvm-guest: setup PV IPIs
Mar 14 00:12:42.981537 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:12:42.981544 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:12:42.981551 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Mar 14 00:12:42.981558 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:12:42.981567 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:12:42.981574 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:12:42.981580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:12:42.981587 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:12:42.981594 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:12:42.981603 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 14 00:12:42.981610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:12:42.981617 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:12:42.981636 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:12:42.981643 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:12:42.981650 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:12:42.981656 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:12:42.981663 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:12:42.981673 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:12:42.981679 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:12:42.981686 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:12:42.981693 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:12:42.981700 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:12:42.981707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:12:42.981713 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 14 00:12:42.981720 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 14 00:12:42.981727 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:12:42.981736 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:12:42.981743 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:12:42.981750 kernel: landlock: Up and running.
Mar 14 00:12:42.981756 kernel: SELinux: Initializing.
Mar 14 00:12:42.981763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.981770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.981777 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:12:42.981784 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981800 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981806 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 14 00:12:42.981813 kernel: ... version: 0
Mar 14 00:12:42.981820 kernel: ... bit width: 48
Mar 14 00:12:42.981826 kernel: ... generic registers: 6
Mar 14 00:12:42.981833 kernel: ... value mask: 0000ffffffffffff
Mar 14 00:12:42.981840 kernel: ... max period: 00007fffffffffff
Mar 14 00:12:42.981847 kernel: ... fixed-purpose events: 0
Mar 14 00:12:42.981853 kernel: ... event mask: 000000000000003f
Mar 14 00:12:42.981863 kernel: signal: max sigframe size: 3376
Mar 14 00:12:42.981869 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:12:42.981876 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:12:42.981883 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:12:42.981889 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:12:42.981896 kernel: .... node #0, CPUs: #1
Mar 14 00:12:42.981903 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:12:42.981909 kernel: smpboot: Max logical packages: 1
Mar 14 00:12:42.981916 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Mar 14 00:12:42.981925 kernel: devtmpfs: initialized
Mar 14 00:12:42.981932 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:12:42.981939 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:12:42.981946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:12:42.981952 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:12:42.981959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:12:42.981966 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:12:42.981972 kernel: audit: type=2000 audit(1773447162.108:1): state=initialized audit_enabled=0 res=1
Mar 14 00:12:42.981979 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:12:42.981988 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:12:42.981995 kernel: cpuidle: using governor menu
Mar 14 00:12:42.982002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:12:42.982008 kernel: dca service started, version 1.12.1
Mar 14 00:12:42.982015 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:12:42.982022 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:12:42.982028 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:12:42.982035 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:12:42.982042 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:12:42.982051 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:12:42.982058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:12:42.982065 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:12:42.982071 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:12:42.982078 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:12:42.982085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:12:42.982092 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:12:42.982098 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:12:42.982105 kernel: ACPI: Interpreter enabled
Mar 14 00:12:42.982114 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:12:42.982121 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:12:42.982128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:12:42.982134 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:12:42.982141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:12:42.982148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:12:42.982364 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:12:42.982505 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:12:42.984779 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:12:42.984793 kernel: PCI host bridge to bus 0000:00
Mar 14 00:12:42.984929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:12:42.985048 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:12:42.985162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:42.985275 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 14 00:12:42.985387 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:42.985506 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 14 00:12:42.985619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:12:42.985788 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:12:42.985922 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:12:42.986047 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:12:42.986169 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:12:42.986299 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:12:42.986422 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:12:42.986555 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 14 00:12:42.988732 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 14 00:12:42.988869 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:12:42.988996 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:12:42.989130 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:12:42.989262 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 14 00:12:42.989388 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:12:42.989511 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:12:42.991679 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:12:42.991829 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:12:42.991957 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:12:42.992099 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:12:42.992223 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 14 00:12:42.995734 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:12:42.995878 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:12:42.996004 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:12:42.996014 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:12:42.996022 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:12:42.996028 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:12:42.996040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:12:42.996047 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:12:42.996054 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:12:42.996061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:12:42.996067 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:12:42.996074 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:12:42.996081 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:12:42.996087 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:12:42.996094 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:12:42.996104 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:12:42.996111 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:12:42.996117 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:12:42.996124 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:12:42.996131 kernel: iommu: Default domain type: Translated
Mar 14 00:12:42.996137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:12:42.996144 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:12:42.996151 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:12:42.996157 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 14 00:12:42.996167 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 14 00:12:42.996291 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:12:42.996414 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:12:42.996536 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:12:42.996546 kernel: vgaarb: loaded
Mar 14 00:12:42.996553 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:12:42.996560 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:12:42.996566 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:12:42.996577 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:12:42.996584 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:12:42.996591 kernel: pnp: PnP ACPI init
Mar 14 00:12:42.996771 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:12:42.996783 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:12:42.996791 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:12:42.996797 kernel: NET: Registered PF_INET protocol family
Mar 14 00:12:42.996804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:12:42.996815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:12:42.996822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:12:42.996829 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:12:42.996835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:12:42.996842 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:12:42.996849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.996856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.996862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:12:42.996869 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:12:42.996990 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:12:42.997104 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:12:42.997217 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:42.997329 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 14 00:12:42.997441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:42.997554 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 14 00:12:42.997563 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:12:42.997570 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 14 00:12:42.997580 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 14 00:12:42.997587 kernel: Initialise system trusted keyrings
Mar 14 00:12:42.997594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:12:42.997601 kernel: Key type asymmetric registered
Mar 14 00:12:42.997607 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:12:42.997614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:12:42.999357 kernel: io scheduler mq-deadline registered
Mar 14 00:12:42.999370 kernel: io scheduler kyber registered
Mar 14 00:12:42.999378 kernel: io scheduler bfq registered
Mar 14 00:12:42.999389 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:12:42.999397 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:12:42.999404 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:12:42.999411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:12:42.999418 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:12:42.999425 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:12:42.999432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:12:42.999439 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:12:42.999446 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:12:42.999612 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 14 00:12:42.999783 kernel: rtc_cmos 00:03: registered as rtc0
Mar 14 00:12:42.999910 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:12:42 UTC (1773447162)
Mar 14 00:12:43.000032 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:12:43.000042 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:12:43.000049 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:12:43.000056 kernel: Segment Routing with IPv6
Mar 14 00:12:43.000063 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:12:43.000075 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:12:43.000082 kernel: Key type dns_resolver registered
Mar 14 00:12:43.000089 kernel: IPI shorthand broadcast: enabled
Mar 14 00:12:43.000096 kernel: sched_clock: Marking stable (892003712, 332785766)->(1366622363, -141832885)
Mar 14 00:12:43.000102 kernel: registered taskstats version 1
Mar 14 00:12:43.000109 kernel: Loading compiled-in X.509 certificates
Mar 14 00:12:43.000116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:12:43.000123 kernel: Key type .fscrypt registered
Mar 14 00:12:43.000129 kernel: Key type fscrypt-provisioning registered
Mar 14 00:12:43.000139 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:12:43.000145 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:12:43.000152 kernel: ima: No architecture policies found
Mar 14 00:12:43.000159 kernel: clk: Disabling unused clocks
Mar 14 00:12:43.000166 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:12:43.000172 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:12:43.000179 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:12:43.000186 kernel: Run /init as init process
Mar 14 00:12:43.000192 kernel: with arguments:
Mar 14 00:12:43.000202 kernel: /init
Mar 14 00:12:43.000209 kernel: with environment:
Mar 14 00:12:43.000216 kernel: HOME=/
Mar 14 00:12:43.000222 kernel: TERM=linux
Mar 14 00:12:43.000231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:43.000239 systemd[1]: Detected virtualization kvm.
Mar 14 00:12:43.000247 systemd[1]: Detected architecture x86-64.
Mar 14 00:12:43.000254 systemd[1]: Running in initrd.
Mar 14 00:12:43.000264 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:12:43.000270 systemd[1]: Hostname set to .
Mar 14 00:12:43.000278 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:12:43.000285 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:12:43.000292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:43.000315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:43.000333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:12:43.000346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:43.000360 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:12:43.000374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:12:43.000388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:12:43.000396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:12:43.000407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:43.000415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:43.000422 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:43.000430 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:43.000437 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:43.000444 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:12:43.000452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:43.000459 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:43.000466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:12:43.000476 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:12:43.000484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:43.000491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:43.000499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:43.000506 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:12:43.000514 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:12:43.000521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:12:43.000529 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:12:43.000538 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:12:43.000546 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:12:43.000553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:12:43.000582 systemd-journald[178]: Collecting audit messages is disabled.
Mar 14 00:12:43.000602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:43.000610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:43.001893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:43.001911 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:12:43.001925 systemd-journald[178]: Journal started
Mar 14 00:12:43.001943 systemd-journald[178]: Runtime Journal (/run/log/journal/9df29ce82f984454b553640aff0e0d95) is 8.0M, max 78.3M, 70.3M free.
Mar 14 00:12:43.000725 systemd-modules-load[179]: Inserted module 'overlay'
Mar 14 00:12:43.091994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:12:43.092015 kernel: Bridge firewalling registered
Mar 14 00:12:43.092026 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:12:43.025433 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 14 00:12:43.092940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:43.094324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:43.100743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:43.103281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:12:43.107104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:12:43.109508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:12:43.117685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:43.141070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:43.143248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:12:43.145609 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:12:43.150775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:12:43.155994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:43.164816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:43.167907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:43.170347 dracut-cmdline[206]: dracut-dracut-053
Mar 14 00:12:43.174891 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:43.201540 systemd-resolved[215]: Positive Trust Anchors:
Mar 14 00:12:43.201551 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:12:43.201579 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:12:43.208266 systemd-resolved[215]: Defaulting to hostname 'linux'.
Mar 14 00:12:43.209318 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:43.210560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:43.245638 kernel: SCSI subsystem initialized
Mar 14 00:12:43.253643 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:12:43.264654 kernel: iscsi: registered transport (tcp)
Mar 14 00:12:43.284876 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:12:43.284906 kernel: QLogic iSCSI HBA Driver
Mar 14 00:12:43.326120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:43.330770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:12:43.356799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:12:43.356834 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:12:43.358891 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:12:43.402647 kernel: raid6: avx2x4 gen() 31609 MB/s
Mar 14 00:12:43.420644 kernel: raid6: avx2x2 gen() 30518 MB/s
Mar 14 00:12:43.438746 kernel: raid6: avx2x1 gen() 25687 MB/s
Mar 14 00:12:43.438764 kernel: raid6: using algorithm avx2x4 gen() 31609 MB/s
Mar 14 00:12:43.458957 kernel: raid6: .... xor() 5187 MB/s, rmw enabled
Mar 14 00:12:43.458976 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:12:43.480650 kernel: xor: automatically using best checksumming function avx
Mar 14 00:12:43.611655 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:12:43.622835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:43.629753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:43.641577 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 14 00:12:43.646157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:43.654786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:12:43.669555 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 14 00:12:43.698435 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:43.703748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:12:43.774019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:43.781804 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:12:43.797355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:43.803096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:43.803881 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:43.805543 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:12:43.813743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:12:43.828190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:43.854658 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:12:43.862647 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:12:44.020141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:44.023941 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:12:44.020273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:44.022252 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:44.023050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:44.028749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:44.030135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:44.070447 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:12:44.070856 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:12:44.070847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:44.086566 kernel: libata version 3.00 loaded.
Mar 14 00:12:44.096666 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:12:44.096877 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:12:44.101684 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:12:44.101880 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:12:44.106604 kernel: scsi host1: ahci
Mar 14 00:12:44.109072 kernel: scsi host2: ahci
Mar 14 00:12:44.111677 kernel: scsi host3: ahci
Mar 14 00:12:44.112642 kernel: scsi host4: ahci
Mar 14 00:12:44.116645 kernel: scsi host5: ahci
Mar 14 00:12:44.119762 kernel: scsi host6: ahci
Mar 14 00:12:44.119943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Mar 14 00:12:44.119955 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Mar 14 00:12:44.119965 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Mar 14 00:12:44.119974 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Mar 14 00:12:44.119990 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Mar 14 00:12:44.119999 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Mar 14 00:12:44.222012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:44.227848 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:44.242314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:44.426669 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.435638 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.435678 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.438908 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.439643 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.441647 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.454998 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 14 00:12:44.481104 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 14 00:12:44.481308 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 14 00:12:44.482658 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 14 00:12:44.482835 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:12:44.492268 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:12:44.492316 kernel: GPT:9289727 != 167739391
Mar 14 00:12:44.492328 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:12:44.494943 kernel: GPT:9289727 != 167739391
Mar 14 00:12:44.497938 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:12:44.497956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:44.501967 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 14 00:12:44.535300 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (458)
Mar 14 00:12:44.542655 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445)
Mar 14 00:12:44.545726 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:12:44.553579 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:12:44.560135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:12:44.564765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:12:44.565566 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:12:44.573752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:12:44.587348 disk-uuid[569]: Primary Header is updated.
Mar 14 00:12:44.587348 disk-uuid[569]: Secondary Entries is updated.
Mar 14 00:12:44.587348 disk-uuid[569]: Secondary Header is updated.
Mar 14 00:12:44.593655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:44.599654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:45.603647 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:45.604949 disk-uuid[570]: The operation has completed successfully.
Mar 14 00:12:45.657235 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:12:45.657368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:12:45.666737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:12:45.670071 sh[584]: Success
Mar 14 00:12:45.683705 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:12:45.734058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:12:45.737733 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:12:45.738789 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:12:45.780747 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:12:45.780776 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:45.780788 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:12:45.786542 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:12:45.786574 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:12:45.797651 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:12:45.799346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:12:45.800713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:12:45.806763 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:12:45.810781 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:12:45.829597 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:45.829647 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:45.829661 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:45.839093 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:45.839118 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:45.854988 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:45.854598 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:12:45.861854 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:12:45.870799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:12:45.918425 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:45.925800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:12:45.951912 systemd-networkd[765]: lo: Link UP
Mar 14 00:12:45.952670 ignition[704]: Ignition 2.19.0
Mar 14 00:12:45.951920 systemd-networkd[765]: lo: Gained carrier
Mar 14 00:12:45.952677 ignition[704]: Stage: fetch-offline
Mar 14 00:12:45.955040 systemd-networkd[765]: Enumeration completed
Mar 14 00:12:45.952723 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:45.955763 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:12:45.952735 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:45.956500 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:45.952832 ignition[704]: parsed url from cmdline: ""
Mar 14 00:12:45.956505 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:12:45.952836 ignition[704]: no config URL provided
Mar 14 00:12:45.958009 systemd-networkd[765]: eth0: Link UP
Mar 14 00:12:45.952842 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.958014 systemd-networkd[765]: eth0: Gained carrier
Mar 14 00:12:45.952854 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.958021 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:45.952859 ignition[704]: failed to fetch config: resource requires networking
Mar 14 00:12:45.959172 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:45.953516 ignition[704]: Ignition finished successfully
Mar 14 00:12:45.960905 systemd[1]: Reached target network.target - Network.
Mar 14 00:12:45.968815 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:12:45.980753 ignition[772]: Ignition 2.19.0
Mar 14 00:12:45.980764 ignition[772]: Stage: fetch
Mar 14 00:12:45.980918 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:45.980930 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:45.981015 ignition[772]: parsed url from cmdline: ""
Mar 14 00:12:45.981020 ignition[772]: no config URL provided
Mar 14 00:12:45.981025 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.981035 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.981052 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 14 00:12:45.981231 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.181839 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 14 00:12:46.182075 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.582269 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 14 00:12:46.582458 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.626680 systemd-networkd[765]: eth0: DHCPv4 address 172.233.218.137/24, gateway 172.233.218.1 acquired from 23.40.197.110
Mar 14 00:12:47.383482 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 14 00:12:47.481595 ignition[772]: PUT result: OK
Mar 14 00:12:47.481710 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 14 00:12:47.590369 ignition[772]: GET result: OK
Mar 14 00:12:47.590484 ignition[772]: parsing config with SHA512: b1ad82fe78f0bdb7812e1ddc68e79d263f19660318f8547d2d93e2df27627d0bd04cffab8a9c87d8e4b8dfd6c3884cd8fc97d03c3091f8cb88ac863e6963936b
Mar 14 00:12:47.594003 unknown[772]: fetched base config from "system"
Mar 14 00:12:47.594922 ignition[772]: fetch: fetch complete
Mar 14 00:12:47.594013 unknown[772]: fetched base config from "system"
Mar 14 00:12:47.594928 ignition[772]: fetch: fetch passed
Mar 14 00:12:47.594023 unknown[772]: fetched user config from "akamai"
Mar 14 00:12:47.594970 ignition[772]: Ignition finished successfully
Mar 14 00:12:47.599033 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:12:47.605782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:12:47.618128 ignition[779]: Ignition 2.19.0
Mar 14 00:12:47.618143 ignition[779]: Stage: kargs
Mar 14 00:12:47.618329 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:47.618341 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:47.621197 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:47.619727 ignition[779]: kargs: kargs passed
Mar 14 00:12:47.619780 ignition[779]: Ignition finished successfully
Mar 14 00:12:47.629847 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:12:47.645770 ignition[786]: Ignition 2.19.0
Mar 14 00:12:47.646722 ignition[786]: Stage: disks
Mar 14 00:12:47.646881 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:47.646893 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:47.648963 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:12:47.647524 ignition[786]: disks: disks passed
Mar 14 00:12:47.671191 systemd-networkd[765]: eth0: Gained IPv6LL
Mar 14 00:12:47.647564 ignition[786]: Ignition finished successfully
Mar 14 00:12:47.672211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:47.673832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:12:47.675351 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:12:47.676729 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:12:47.678099 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:12:47.685797 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:12:47.701452 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:12:47.706215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:12:47.713694 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:12:47.796653 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:12:47.797546 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:12:47.798811 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:12:47.805707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:47.807729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:12:47.810492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:12:47.811720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:12:47.811743 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:47.832212 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802)
Mar 14 00:12:47.832235 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:47.832249 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:47.832261 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:47.832280 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:47.832292 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:47.825357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:12:47.834450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:47.840775 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:12:47.885639 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:12:47.890728 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:12:47.895465 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:12:47.900695 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:12:47.983337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:47.988710 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:12:47.990768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:47.999061 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:12:48.002855 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:48.025246 ignition[915]: INFO : Ignition 2.19.0
Mar 14 00:12:48.028157 ignition[915]: INFO : Stage: mount
Mar 14 00:12:48.028157 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:48.028157 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:48.028157 ignition[915]: INFO : mount: mount passed
Mar 14 00:12:48.028157 ignition[915]: INFO : Ignition finished successfully
Mar 14 00:12:48.027825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:48.029945 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:12:48.034732 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:12:48.804967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:48.819653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926)
Mar 14 00:12:48.823706 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:48.823730 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:48.828006 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:48.834186 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:48.834215 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:48.837272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:48.860548 ignition[942]: INFO : Ignition 2.19.0
Mar 14 00:12:48.862424 ignition[942]: INFO : Stage: files
Mar 14 00:12:48.862424 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:48.862424 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:48.862424 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:12:48.866075 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:12:48.866075 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:12:48.868116 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:12:48.869266 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:12:48.870315 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:12:48.869299 unknown[942]: wrote ssh authorized keys file for user: core
Mar 14 00:12:48.872243 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:12:48.872243 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:12:49.104070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:12:49.202872 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:12:49.685910 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:12:50.492418 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:50.492418 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: files passed
Mar 14 00:12:50.495836 ignition[942]: INFO : Ignition finished successfully
Mar 14 00:12:50.497085 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:12:50.527801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:12:50.532696 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:12:50.537169 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:12:50.537279 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:12:50.548125 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.548125 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.551305 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.552719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:50.554249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:12:50.560774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:12:50.591072 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:12:50.591190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:12:50.592855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:12:50.594312 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:12:50.595930 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:12:50.602773 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:12:50.614087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:50.620754 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:12:50.629192 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:50.630053 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:50.631690 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:12:50.633353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:12:50.633450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:50.635421 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:12:50.636469 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:12:50.638093 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:12:50.639564 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:50.641026 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:50.642714 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:12:50.644315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:50.645956 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:12:50.647514 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:12:50.649140 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:12:50.650665 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:12:50.650836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:50.652537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:50.653612 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:50.655073 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:12:50.655458 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:50.656754 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:12:50.656850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:50.658989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:12:50.659094 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:50.660191 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:12:50.660290 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:12:50.669756 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:12:50.671993 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:12:50.672145 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:50.675810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:50.678677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:12:50.679763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:50.683733 ignition[995]: INFO : Ignition 2.19.0
Mar 14 00:12:50.683733 ignition[995]: INFO : Stage: umount
Mar 14 00:12:50.683733 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:50.683733 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:50.683733 ignition[995]: INFO : umount: umount passed
Mar 14 00:12:50.683733 ignition[995]: INFO : Ignition finished successfully
Mar 14 00:12:50.683224 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:12:50.683324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:50.689673 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:12:50.689776 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:12:50.691186 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:12:50.691375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:12:50.693277 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:12:50.693327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:50.696190 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:12:50.696244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:12:50.700858 systemd[1]: Stopped target network.target - Network.
Mar 14 00:12:50.703691 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:12:50.703746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:50.705010 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:12:50.705717 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:12:50.709912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:50.732972 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:12:50.734656 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:12:50.736247 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:12:50.736300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:50.737904 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:12:50.737955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:50.739562 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:12:50.739612 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:12:50.741183 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:12:50.741232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:50.743109 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:12:50.744409 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:50.747297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:12:50.747949 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:12:50.748052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:12:50.748674 systemd-networkd[765]: eth0: DHCPv6 lease lost
Mar 14 00:12:50.751452 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:12:50.751562 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:50.753501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:12:50.754981 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:12:50.756414 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:12:50.756596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:50.761399 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:12:50.761463 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:50.762995 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:12:50.763048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:50.770713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:12:50.771450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:12:50.771507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:50.772345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:12:50.772400 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:50.773164 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:12:50.773213 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:50.774682 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:12:50.774730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:50.776399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:50.789196 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:12:50.790110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:12:50.796294 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:12:50.796486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:50.798172 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:12:50.798223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:50.799575 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:12:50.799616 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:50.801140 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:12:50.801191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:50.803337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:12:50.803386 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:50.804933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:50.804981 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:50.811782 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:12:50.812854 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:12:50.812909 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:50.813702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:50.813752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:50.819589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:12:50.819816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:12:50.821743 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:12:50.826816 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:12:50.836228 systemd[1]: Switching root.
Mar 14 00:12:50.866621 systemd-journald[178]: Journal stopped
Mar 14 00:12:42.980657 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:12:42.980680 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:42.980688 kernel: BIOS-provided physical RAM map:
Mar 14 00:12:42.980695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 14 00:12:42.980700 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 14 00:12:42.980709 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:12:42.980715 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 14 00:12:42.980721 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 14 00:12:42.980726 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:12:42.980732 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:12:42.980738 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:12:42.980744 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:12:42.980749 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 14 00:12:42.980758 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:12:42.980765 kernel: NX (Execute Disable) protection: active
Mar 14 00:12:42.980771 kernel: APIC: Static calls initialized
Mar 14 00:12:42.980777 kernel: SMBIOS 2.8 present.
Mar 14 00:12:42.980783 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 14 00:12:42.980789 kernel: Hypervisor detected: KVM
Mar 14 00:12:42.980798 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:12:42.980804 kernel: kvm-clock: using sched offset of 6000722411 cycles
Mar 14 00:12:42.980810 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:12:42.980816 kernel: tsc: Detected 2000.002 MHz processor
Mar 14 00:12:42.980823 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:12:42.980829 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:12:42.980836 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 14 00:12:42.980842 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:12:42.980848 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:12:42.980856 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 14 00:12:42.980863 kernel: Using GB pages for direct mapping
Mar 14 00:12:42.980869 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:12:42.980875 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 14 00:12:42.980881 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980887 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980893 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980899 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 14 00:12:42.980905 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980914 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980920 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980926 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:12:42.980936 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 14 00:12:42.980943 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 14 00:12:42.980949 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 14 00:12:42.980958 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 14 00:12:42.980964 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 14 00:12:42.980971 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 14 00:12:42.980977 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 14 00:12:42.980989 kernel: No NUMA configuration found
Mar 14 00:12:42.981000 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 14 00:12:42.981012 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Mar 14 00:12:42.981023 kernel: Zone ranges:
Mar 14 00:12:42.981040 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:12:42.981051 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:12:42.981058 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 14 00:12:42.981065 kernel: Movable zone start for each node
Mar 14 00:12:42.981071 kernel: Early memory node ranges
Mar 14 00:12:42.981078 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:12:42.981084 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 14 00:12:42.981090 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 14 00:12:42.981097 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 14 00:12:42.981103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:12:42.981113 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:12:42.981119 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 14 00:12:42.981126 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:12:42.981132 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:12:42.981139 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:12:42.981145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:12:42.981152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:12:42.981158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:12:42.981165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:12:42.981174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:12:42.981180 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:12:42.981187 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:12:42.981193 kernel: TSC deadline timer available
Mar 14 00:12:42.981200 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:12:42.981206 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:12:42.981212 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:12:42.981219 kernel: kvm-guest: setup PV sched yield
Mar 14 00:12:42.981225 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:12:42.981234 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:12:42.981241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:12:42.981247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:12:42.981254 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:12:42.981260 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:12:42.981267 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:12:42.981273 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:12:42.981280 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:12:42.981287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:42.981296 kernel: random: crng init done
Mar 14 00:12:42.981303 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:12:42.981310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:12:42.981316 kernel: Fallback order for Node 0: 0
Mar 14 00:12:42.981323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 14 00:12:42.981329 kernel: Policy zone: Normal
Mar 14 00:12:42.981335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:12:42.981342 kernel: software IO TLB: area num 2.
Mar 14 00:12:42.981351 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Mar 14 00:12:42.981357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:12:42.981364 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:12:42.981370 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:12:42.981377 kernel: Dynamic Preempt: voluntary
Mar 14 00:12:42.981383 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:12:42.981391 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:12:42.981397 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:12:42.981404 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:12:42.981413 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:12:42.981420 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:12:42.981426 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:12:42.981433 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:12:42.981439 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:12:42.981445 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:12:42.981452 kernel: Console: colour VGA+ 80x25
Mar 14 00:12:42.981458 kernel: printk: console [tty0] enabled
Mar 14 00:12:42.981465 kernel: printk: console [ttyS0] enabled
Mar 14 00:12:42.981473 kernel: ACPI: Core revision 20230628
Mar 14 00:12:42.981480 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:12:42.981486 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:12:42.981493 kernel: x2apic enabled
Mar 14 00:12:42.981508 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:12:42.981517 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:12:42.981524 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:12:42.981531 kernel: kvm-guest: setup PV IPIs
Mar 14 00:12:42.981537 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:12:42.981544 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:12:42.981551 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Mar 14 00:12:42.981558 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:12:42.981567 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:12:42.981574 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:12:42.981580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:12:42.981587 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:12:42.981594 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:12:42.981603 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 14 00:12:42.981610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:12:42.981617 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:12:42.981636 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:12:42.981643 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:12:42.981650 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:12:42.981656 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:12:42.981663 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:12:42.981673 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:12:42.981679 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:12:42.981686 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:12:42.981693 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:12:42.981700 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:12:42.981707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:12:42.981713 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 14 00:12:42.981720 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 14 00:12:42.981727 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:12:42.981736 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:12:42.981743 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:12:42.981750 kernel: landlock: Up and running.
Mar 14 00:12:42.981756 kernel: SELinux: Initializing.
Mar 14 00:12:42.981763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.981770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.981777 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:12:42.981784 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981800 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:42.981806 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 14 00:12:42.981813 kernel: ... version: 0
Mar 14 00:12:42.981820 kernel: ... bit width: 48
Mar 14 00:12:42.981826 kernel: ... generic registers: 6
Mar 14 00:12:42.981833 kernel: ... value mask: 0000ffffffffffff
Mar 14 00:12:42.981840 kernel: ... max period: 00007fffffffffff
Mar 14 00:12:42.981847 kernel: ... fixed-purpose events: 0
Mar 14 00:12:42.981853 kernel: ... event mask: 000000000000003f
Mar 14 00:12:42.981863 kernel: signal: max sigframe size: 3376
Mar 14 00:12:42.981869 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:12:42.981876 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:12:42.981883 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:12:42.981889 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:12:42.981896 kernel: .... node #0, CPUs: #1
Mar 14 00:12:42.981903 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:12:42.981909 kernel: smpboot: Max logical packages: 1
Mar 14 00:12:42.981916 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Mar 14 00:12:42.981925 kernel: devtmpfs: initialized
Mar 14 00:12:42.981932 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:12:42.981939 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:12:42.981946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:12:42.981952 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:12:42.981959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:12:42.981966 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:12:42.981972 kernel: audit: type=2000 audit(1773447162.108:1): state=initialized audit_enabled=0 res=1
Mar 14 00:12:42.981979 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:12:42.981988 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:12:42.981995 kernel: cpuidle: using governor menu
Mar 14 00:12:42.982002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:12:42.982008 kernel: dca service started, version 1.12.1
Mar 14 00:12:42.982015 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:12:42.982022 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:12:42.982028 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:12:42.982035 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:12:42.982042 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:12:42.982051 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:12:42.982058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:12:42.982065 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:12:42.982071 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:12:42.982078 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:12:42.982085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:12:42.982092 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:12:42.982098 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:12:42.982105 kernel: ACPI: Interpreter enabled
Mar 14 00:12:42.982114 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:12:42.982121 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:12:42.982128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:12:42.982134 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:12:42.982141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:12:42.982148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:12:42.982364 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:12:42.982505 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:12:42.984779 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:12:42.984793 kernel: PCI host bridge to bus 0000:00
Mar 14 00:12:42.984929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:12:42.985048 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:12:42.985162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:42.985275 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 14 00:12:42.985387 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:42.985506 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 14 00:12:42.985619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:12:42.985788 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:12:42.985922 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:12:42.986047 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:12:42.986169 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:12:42.986299 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:12:42.986422 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:12:42.986555 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 14 00:12:42.988732 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 14 00:12:42.988869 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:12:42.988996 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:12:42.989130 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:12:42.989262 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 14 00:12:42.989388 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:12:42.989511 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:12:42.991679 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:12:42.991829 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:12:42.991957 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:12:42.992099 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:12:42.992223 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 14 00:12:42.995734 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:12:42.995878 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:12:42.996004 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:12:42.996014 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:12:42.996022 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:12:42.996028 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:12:42.996040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:12:42.996047 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:12:42.996054 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:12:42.996061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:12:42.996067 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:12:42.996074 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:12:42.996081 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:12:42.996087 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:12:42.996094 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:12:42.996104 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:12:42.996111 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:12:42.996117 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:12:42.996124 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:12:42.996131 kernel: iommu: Default domain type: Translated
Mar 14 00:12:42.996137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:12:42.996144 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:12:42.996151 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:12:42.996157 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 14 00:12:42.996167 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 14 00:12:42.996291 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:12:42.996414 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:12:42.996536 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:12:42.996546 kernel: vgaarb: loaded
Mar 14 00:12:42.996553 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:12:42.996560 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:12:42.996566 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:12:42.996577 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:12:42.996584 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:12:42.996591 kernel: pnp: PnP ACPI init
Mar 14 00:12:42.996771 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:12:42.996783 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:12:42.996791 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:12:42.996797 kernel: NET: Registered PF_INET protocol family
Mar 14 00:12:42.996804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:12:42.996815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:12:42.996822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:12:42.996829 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:12:42.996835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:12:42.996842 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:12:42.996849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.996856 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:42.996862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:12:42.996869 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:12:42.996990 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:12:42.997104 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:12:42.997217 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:12:42.997329 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 14 00:12:42.997441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:12:42.997554 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 14 00:12:42.997563 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:12:42.997570 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 14 00:12:42.997580 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 14 00:12:42.997587 kernel: Initialise system trusted keyrings
Mar 14 00:12:42.997594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:12:42.997601 kernel: Key type asymmetric registered
Mar 14 00:12:42.997607 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:12:42.997614 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:12:42.999357 kernel: io scheduler mq-deadline registered
Mar 14 00:12:42.999370 kernel: io scheduler kyber registered
Mar 14 00:12:42.999378 kernel: io scheduler bfq registered
Mar 14 00:12:42.999389 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:12:42.999397 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:12:42.999404 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:12:42.999411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:12:42.999418 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:12:42.999425 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:12:42.999432 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:12:42.999439 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:12:42.999446 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:12:42.999612 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 14 00:12:42.999783 kernel: rtc_cmos 00:03: registered as rtc0
Mar 14 00:12:42.999910 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:12:42 UTC (1773447162)
Mar 14 00:12:43.000032 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:12:43.000042 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:12:43.000049 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:12:43.000056 kernel: Segment Routing with IPv6
Mar 14 00:12:43.000063 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:12:43.000075 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:12:43.000082 kernel: Key type dns_resolver registered
Mar 14 00:12:43.000089 kernel: IPI shorthand broadcast: enabled
Mar 14 00:12:43.000096 kernel: sched_clock: Marking stable (892003712, 332785766)->(1366622363, -141832885)
Mar 14 00:12:43.000102 kernel: registered taskstats version 1
Mar 14 00:12:43.000109 kernel: Loading compiled-in X.509 certificates
Mar 14 00:12:43.000116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:12:43.000123 kernel: Key type .fscrypt registered
Mar 14 00:12:43.000129 kernel: Key type fscrypt-provisioning registered
Mar 14 00:12:43.000139 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:12:43.000145 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:12:43.000152 kernel: ima: No architecture policies found
Mar 14 00:12:43.000159 kernel: clk: Disabling unused clocks
Mar 14 00:12:43.000166 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:12:43.000172 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:12:43.000179 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:12:43.000186 kernel: Run /init as init process
Mar 14 00:12:43.000192 kernel: with arguments:
Mar 14 00:12:43.000202 kernel: /init
Mar 14 00:12:43.000209 kernel: with environment:
Mar 14 00:12:43.000216 kernel: HOME=/
Mar 14 00:12:43.000222 kernel: TERM=linux
Mar 14 00:12:43.000231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:43.000239 systemd[1]: Detected virtualization kvm.
Mar 14 00:12:43.000247 systemd[1]: Detected architecture x86-64.
Mar 14 00:12:43.000254 systemd[1]: Running in initrd.
Mar 14 00:12:43.000264 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:12:43.000270 systemd[1]: Hostname set to .
Mar 14 00:12:43.000278 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:12:43.000285 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:12:43.000292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:43.000315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:43.000333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:12:43.000346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:43.000360 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:12:43.000374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:12:43.000388 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:12:43.000396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:12:43.000407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:43.000415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:43.000422 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:43.000430 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:43.000437 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:43.000444 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:12:43.000452 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:43.000459 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:43.000466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:12:43.000476 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:12:43.000484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:43.000491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:43.000499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:43.000506 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:12:43.000514 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:12:43.000521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:12:43.000529 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:12:43.000538 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:12:43.000546 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:12:43.000553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:12:43.000582 systemd-journald[178]: Collecting audit messages is disabled.
Mar 14 00:12:43.000602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:43.000610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:43.001893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:43.001911 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:12:43.001925 systemd-journald[178]: Journal started
Mar 14 00:12:43.001943 systemd-journald[178]: Runtime Journal (/run/log/journal/9df29ce82f984454b553640aff0e0d95) is 8.0M, max 78.3M, 70.3M free.
Mar 14 00:12:43.000725 systemd-modules-load[179]: Inserted module 'overlay'
Mar 14 00:12:43.091994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:12:43.092015 kernel: Bridge firewalling registered
Mar 14 00:12:43.092026 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:12:43.025433 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 14 00:12:43.092940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:43.094324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:43.100743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:43.103281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:12:43.107104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:12:43.109508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:12:43.117685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:43.141070 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:43.143248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:12:43.145609 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:12:43.150775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:12:43.155994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:43.164816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:43.167907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:43.170347 dracut-cmdline[206]: dracut-dracut-053
Mar 14 00:12:43.174891 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:12:43.201540 systemd-resolved[215]: Positive Trust Anchors:
Mar 14 00:12:43.201551 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:12:43.201579 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:12:43.208266 systemd-resolved[215]: Defaulting to hostname 'linux'.
Mar 14 00:12:43.209318 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:43.210560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:43.245638 kernel: SCSI subsystem initialized
Mar 14 00:12:43.253643 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:12:43.264654 kernel: iscsi: registered transport (tcp)
Mar 14 00:12:43.284876 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:12:43.284906 kernel: QLogic iSCSI HBA Driver
Mar 14 00:12:43.326120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:43.330770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:12:43.356799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:12:43.356834 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:12:43.358891 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:12:43.402647 kernel: raid6: avx2x4 gen() 31609 MB/s
Mar 14 00:12:43.420644 kernel: raid6: avx2x2 gen() 30518 MB/s
Mar 14 00:12:43.438746 kernel: raid6: avx2x1 gen() 25687 MB/s
Mar 14 00:12:43.438764 kernel: raid6: using algorithm avx2x4 gen() 31609 MB/s
Mar 14 00:12:43.458957 kernel: raid6: .... xor() 5187 MB/s, rmw enabled
Mar 14 00:12:43.458976 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:12:43.480650 kernel: xor: automatically using best checksumming function avx
Mar 14 00:12:43.611655 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:12:43.622835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:43.629753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:43.641577 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 14 00:12:43.646157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:43.654786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:12:43.669555 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 14 00:12:43.698435 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:43.703748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:12:43.774019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:43.781804 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:12:43.797355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:43.803096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:43.803881 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:43.805543 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:12:43.813743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:12:43.828190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:43.854658 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:12:43.862647 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:12:44.020141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:44.023941 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:12:44.020273 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:44.022252 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:44.023050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:44.028749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:44.030135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:44.070447 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:12:44.070856 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:12:44.070847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:44.086566 kernel: libata version 3.00 loaded.
Mar 14 00:12:44.096666 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:12:44.096877 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:12:44.101684 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:12:44.101880 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:12:44.106604 kernel: scsi host1: ahci
Mar 14 00:12:44.109072 kernel: scsi host2: ahci
Mar 14 00:12:44.111677 kernel: scsi host3: ahci
Mar 14 00:12:44.112642 kernel: scsi host4: ahci
Mar 14 00:12:44.116645 kernel: scsi host5: ahci
Mar 14 00:12:44.119762 kernel: scsi host6: ahci
Mar 14 00:12:44.119943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Mar 14 00:12:44.119955 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Mar 14 00:12:44.119965 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Mar 14 00:12:44.119974 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Mar 14 00:12:44.119990 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Mar 14 00:12:44.119999 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Mar 14 00:12:44.222012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:44.227848 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:44.242314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:44.426669 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.435638 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.435678 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.438908 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.439643 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.441647 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:12:44.454998 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 14 00:12:44.481104 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 14 00:12:44.481308 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 14 00:12:44.482658 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 14 00:12:44.482835 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:12:44.492268 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:12:44.492316 kernel: GPT:9289727 != 167739391
Mar 14 00:12:44.492328 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:12:44.494943 kernel: GPT:9289727 != 167739391
Mar 14 00:12:44.497938 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:12:44.497956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:44.501967 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 14 00:12:44.535300 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (458)
Mar 14 00:12:44.542655 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445)
Mar 14 00:12:44.545726 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:12:44.553579 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:12:44.560135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:12:44.564765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:12:44.565566 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:12:44.573752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:12:44.587348 disk-uuid[569]: Primary Header is updated.
Mar 14 00:12:44.587348 disk-uuid[569]: Secondary Entries is updated.
Mar 14 00:12:44.587348 disk-uuid[569]: Secondary Header is updated.
Mar 14 00:12:44.593655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:44.599654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:45.603647 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:12:45.604949 disk-uuid[570]: The operation has completed successfully.
Mar 14 00:12:45.657235 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:12:45.657368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:12:45.666737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:12:45.670071 sh[584]: Success
Mar 14 00:12:45.683705 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:12:45.734058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:12:45.737733 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:12:45.738789 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:12:45.780747 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:12:45.780776 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:45.780788 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:12:45.786542 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:12:45.786574 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:12:45.797651 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:12:45.799346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:12:45.800713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:12:45.806763 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:12:45.810781 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:12:45.829597 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:45.829647 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:45.829661 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:45.839093 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:45.839118 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:45.854988 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:45.854598 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:12:45.861854 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:12:45.870799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:12:45.918425 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:45.925800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:12:45.951912 systemd-networkd[765]: lo: Link UP
Mar 14 00:12:45.952670 ignition[704]: Ignition 2.19.0
Mar 14 00:12:45.951920 systemd-networkd[765]: lo: Gained carrier
Mar 14 00:12:45.952677 ignition[704]: Stage: fetch-offline
Mar 14 00:12:45.955040 systemd-networkd[765]: Enumeration completed
Mar 14 00:12:45.952723 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:45.955763 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:12:45.952735 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:45.956500 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:45.952832 ignition[704]: parsed url from cmdline: ""
Mar 14 00:12:45.956505 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:12:45.952836 ignition[704]: no config URL provided
Mar 14 00:12:45.958009 systemd-networkd[765]: eth0: Link UP
Mar 14 00:12:45.952842 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.958014 systemd-networkd[765]: eth0: Gained carrier
Mar 14 00:12:45.952854 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.958021 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:45.952859 ignition[704]: failed to fetch config: resource requires networking
Mar 14 00:12:45.959172 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:45.953516 ignition[704]: Ignition finished successfully
Mar 14 00:12:45.960905 systemd[1]: Reached target network.target - Network.
Mar 14 00:12:45.968815 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:12:45.980753 ignition[772]: Ignition 2.19.0
Mar 14 00:12:45.980764 ignition[772]: Stage: fetch
Mar 14 00:12:45.980918 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:45.980930 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:45.981015 ignition[772]: parsed url from cmdline: ""
Mar 14 00:12:45.981020 ignition[772]: no config URL provided
Mar 14 00:12:45.981025 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.981035 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:12:45.981052 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 14 00:12:45.981231 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.181839 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 14 00:12:46.182075 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.582269 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 14 00:12:46.582458 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:12:46.626680 systemd-networkd[765]: eth0: DHCPv4 address 172.233.218.137/24, gateway 172.233.218.1 acquired from 23.40.197.110
Mar 14 00:12:47.383482 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 14 00:12:47.481595 ignition[772]: PUT result: OK
Mar 14 00:12:47.481710 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 14 00:12:47.590369 ignition[772]: GET result: OK
Mar 14 00:12:47.590484 ignition[772]: parsing config with SHA512: b1ad82fe78f0bdb7812e1ddc68e79d263f19660318f8547d2d93e2df27627d0bd04cffab8a9c87d8e4b8dfd6c3884cd8fc97d03c3091f8cb88ac863e6963936b
Mar 14 00:12:47.594003 unknown[772]: fetched base config from "system"
Mar 14 00:12:47.594922 ignition[772]: fetch: fetch complete
Mar 14 00:12:47.594013 unknown[772]: fetched base config from "system"
Mar 14 00:12:47.594928 ignition[772]: fetch: fetch passed
Mar 14 00:12:47.594023 unknown[772]: fetched user config from "akamai"
Mar 14 00:12:47.594970 ignition[772]: Ignition finished successfully
Mar 14 00:12:47.599033 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:12:47.605782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:12:47.618128 ignition[779]: Ignition 2.19.0
Mar 14 00:12:47.618143 ignition[779]: Stage: kargs
Mar 14 00:12:47.618329 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:47.618341 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:47.621197 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:47.619727 ignition[779]: kargs: kargs passed
Mar 14 00:12:47.619780 ignition[779]: Ignition finished successfully
Mar 14 00:12:47.629847 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:12:47.645770 ignition[786]: Ignition 2.19.0
Mar 14 00:12:47.646722 ignition[786]: Stage: disks
Mar 14 00:12:47.646881 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:47.646893 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:47.648963 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:12:47.647524 ignition[786]: disks: disks passed
Mar 14 00:12:47.671191 systemd-networkd[765]: eth0: Gained IPv6LL
Mar 14 00:12:47.647564 ignition[786]: Ignition finished successfully
Mar 14 00:12:47.672211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:47.673832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:12:47.675351 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:12:47.676729 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:12:47.678099 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:12:47.685797 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:12:47.701452 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:12:47.706215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:12:47.713694 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:12:47.796653 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:12:47.797546 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:12:47.798811 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:12:47.805707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:47.807729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:12:47.810492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:12:47.811720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:12:47.811743 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:47.832212 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802)
Mar 14 00:12:47.832235 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:47.832249 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:47.832261 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:47.832280 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:47.832292 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:47.825357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:12:47.834450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:47.840775 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:12:47.885639 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:12:47.890728 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:12:47.895465 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:12:47.900695 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:12:47.983337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:47.988710 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:12:47.990768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:47.999061 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:12:48.002855 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:48.025246 ignition[915]: INFO : Ignition 2.19.0
Mar 14 00:12:48.028157 ignition[915]: INFO : Stage: mount
Mar 14 00:12:48.028157 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:48.028157 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:48.028157 ignition[915]: INFO : mount: mount passed
Mar 14 00:12:48.028157 ignition[915]: INFO : Ignition finished successfully
Mar 14 00:12:48.027825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:48.029945 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:12:48.034732 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:12:48.804967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:48.819653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (926)
Mar 14 00:12:48.823706 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:12:48.823730 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:12:48.828006 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:12:48.834186 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:12:48.834215 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:12:48.837272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:48.860548 ignition[942]: INFO : Ignition 2.19.0
Mar 14 00:12:48.862424 ignition[942]: INFO : Stage: files
Mar 14 00:12:48.862424 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:48.862424 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:48.862424 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:12:48.866075 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:12:48.866075 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:12:48.868116 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:12:48.869266 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:12:48.870315 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:12:48.869299 unknown[942]: wrote ssh authorized keys file for user: core
Mar 14 00:12:48.872243 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:12:48.872243 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:12:49.104070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:12:49.202872 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:49.204973 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:12:49.685910 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:12:50.492418 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:12:50.492418 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:50.495836 ignition[942]: INFO : files: files passed
Mar 14 00:12:50.495836 ignition[942]: INFO : Ignition finished successfully
Mar 14 00:12:50.497085 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:12:50.527801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:12:50.532696 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:12:50.537169 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:12:50.537279 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:12:50.548125 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.548125 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.551305 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:50.552719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:50.554249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:12:50.560774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:12:50.591072 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:12:50.591190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:12:50.592855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:12:50.594312 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:12:50.595930 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:12:50.602773 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:12:50.614087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:50.620754 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:12:50.629192 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:50.630053 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:50.631690 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:12:50.633353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:12:50.633450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:50.635421 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:12:50.636469 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:12:50.638093 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:12:50.639564 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:50.641026 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:50.642714 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:12:50.644315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:50.645956 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:12:50.647514 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:12:50.649140 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:12:50.650665 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:12:50.650836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:50.652537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:50.653612 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:50.655073 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:12:50.655458 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:50.656754 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:12:50.656850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:50.658989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:12:50.659094 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:50.660191 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:12:50.660290 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:12:50.669756 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:12:50.671993 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:12:50.672145 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:50.675810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:50.678677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:12:50.679763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:50.683733 ignition[995]: INFO : Ignition 2.19.0
Mar 14 00:12:50.683733 ignition[995]: INFO : Stage: umount
Mar 14 00:12:50.683733 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:50.683733 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 14 00:12:50.683733 ignition[995]: INFO : umount: umount passed
Mar 14 00:12:50.683733 ignition[995]: INFO : Ignition finished successfully
Mar 14 00:12:50.683224 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:12:50.683324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:50.689673 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:12:50.689776 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:12:50.691186 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:12:50.691375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:12:50.693277 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:12:50.693327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:50.696190 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:12:50.696244 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:12:50.700858 systemd[1]: Stopped target network.target - Network.
Mar 14 00:12:50.703691 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:12:50.703746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:50.705010 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:12:50.705717 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:12:50.709912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:50.732972 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:12:50.734656 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:12:50.736247 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:12:50.736300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:50.737904 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:12:50.737955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:50.739562 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:12:50.739612 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:12:50.741183 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:12:50.741232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:50.743109 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:12:50.744409 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:50.747297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:12:50.747949 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:12:50.748052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:12:50.748674 systemd-networkd[765]: eth0: DHCPv6 lease lost
Mar 14 00:12:50.751452 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:12:50.751562 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:50.753501 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:12:50.754981 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:12:50.756414 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:12:50.756596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:50.761399 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:12:50.761463 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:50.762995 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:12:50.763048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:50.770713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:12:50.771450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:12:50.771507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:50.772345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:12:50.772400 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:50.773164 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:12:50.773213 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:50.774682 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:12:50.774730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:50.776399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:50.789196 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:12:50.790110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:12:50.796294 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:12:50.796486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:50.798172 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:12:50.798223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:50.799575 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:12:50.799616 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:50.801140 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:12:50.801191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:50.803337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:12:50.803386 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:50.804933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:50.804981 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:50.811782 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:12:50.812854 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:12:50.812909 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:50.813702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:50.813752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:50.819589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:12:50.819816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:12:50.821743 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:12:50.826816 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:12:50.836228 systemd[1]: Switching root.
Mar 14 00:12:50.866621 systemd-journald[178]: Journal stopped
Mar 14 00:12:52.013075 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:12:52.013104 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:12:52.013116 kernel: SELinux: policy capability open_perms=1
Mar 14 00:12:52.013126 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:12:52.013139 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:12:52.013148 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:12:52.013158 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:12:52.013167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:12:52.013177 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:12:52.013186 kernel: audit: type=1403 audit(1773447171.012:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:12:52.013196 systemd[1]: Successfully loaded SELinux policy in 54.397ms.
Mar 14 00:12:52.013209 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.601ms.
Mar 14 00:12:52.013220 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:52.013231 systemd[1]: Detected virtualization kvm.
Mar 14 00:12:52.013242 systemd[1]: Detected architecture x86-64.
Mar 14 00:12:52.013252 systemd[1]: Detected first boot.
Mar 14 00:12:52.013265 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:12:52.013275 zram_generator::config[1039]: No configuration found.
Mar 14 00:12:52.013285 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:12:52.013296 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:12:52.013305 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:12:52.013316 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:12:52.013327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:12:52.013341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:12:52.013351 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:12:52.013362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:12:52.013372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:12:52.013383 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:12:52.013394 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:12:52.013404 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:12:52.013417 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:52.013429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:52.013440 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:12:52.013451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:12:52.013462 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:12:52.013473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:52.013483 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:12:52.013493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:52.013506 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:12:52.013517 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:12:52.013531 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:12:52.013542 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:12:52.013553 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:52.013564 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:12:52.013575 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:52.013586 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:52.013599 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:12:52.013610 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:12:52.013621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:52.013647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:52.013658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:52.013672 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:12:52.013684 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:12:52.013695 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:12:52.013706 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:12:52.013717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:12:52.013728 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:12:52.013739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:12:52.013750 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:12:52.013764 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:12:52.013775 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:12:52.013785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:12:52.013797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:12:52.013808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:12:52.013819 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:12:52.013829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:12:52.013840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:12:52.013854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:12:52.013865 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:12:52.013875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:12:52.013886 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:12:52.013897 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:12:52.013908 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:12:52.013920 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:12:52.013931 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:12:52.013944 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:12:52.013955 kernel: loop: module loaded Mar 14 00:12:52.013965 kernel: fuse: init (API version 7.39) Mar 14 00:12:52.013975 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:12:52.013986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:12:52.013997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:12:52.014008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:12:52.014019 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:12:52.014030 systemd[1]: Stopped verity-setup.service. Mar 14 00:12:52.014064 systemd-journald[1122]: Collecting audit messages is disabled. Mar 14 00:12:52.014085 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:12:52.014096 systemd-journald[1122]: Journal started Mar 14 00:12:52.014118 systemd-journald[1122]: Runtime Journal (/run/log/journal/b0777e9d2c57417ea89af690b14080a9) is 8.0M, max 78.3M, 70.3M free. Mar 14 00:12:52.016538 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:12:51.617133 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:12:51.633903 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 14 00:12:51.634360 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:12:52.026685 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:12:52.026715 kernel: ACPI: bus type drm_connector registered Mar 14 00:12:52.024282 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:12:52.027538 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:12:52.028700 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 14 00:12:52.029890 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:12:52.030966 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:12:52.032224 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:12:52.033343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:12:52.034490 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:12:52.034759 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:12:52.035901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:12:52.036120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:12:52.037250 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:12:52.037470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:12:52.038873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:12:52.039081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:12:52.040296 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:12:52.040535 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:12:52.041762 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:12:52.041970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:12:52.043074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:12:52.044320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:12:52.045411 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:12:52.060401 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 14 00:12:52.068219 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:12:52.073714 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:12:52.075695 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:12:52.075780 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:12:52.078099 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:12:52.108754 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:12:52.113763 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:12:52.114806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:52.116606 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:12:52.120422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:12:52.122174 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:12:52.131403 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:12:52.133711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:12:52.134724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:12:52.139752 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:12:52.152847 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:12:52.156605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Mar 14 00:12:52.160016 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:12:52.163134 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:12:52.181191 systemd-journald[1122]: Time spent on flushing to /var/log/journal/b0777e9d2c57417ea89af690b14080a9 is 47.009ms for 975 entries. Mar 14 00:12:52.181191 systemd-journald[1122]: System Journal (/var/log/journal/b0777e9d2c57417ea89af690b14080a9) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:12:52.271944 systemd-journald[1122]: Received client request to flush runtime journal. Mar 14 00:12:52.272047 kernel: loop0: detected capacity change from 0 to 140768 Mar 14 00:12:52.272265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:12:52.191670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:12:52.220925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:12:52.230483 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:12:52.232567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:12:52.235528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:12:52.246758 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:12:52.275800 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:12:52.285237 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:12:52.287540 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:12:52.297638 kernel: loop1: detected capacity change from 0 to 142488 Mar 14 00:12:52.301807 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Mar 14 00:12:52.303375 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:12:52.312733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:12:52.341712 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 14 00:12:52.342012 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Mar 14 00:12:52.344655 kernel: loop2: detected capacity change from 0 to 8 Mar 14 00:12:52.348492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:12:52.374965 kernel: loop3: detected capacity change from 0 to 217752 Mar 14 00:12:52.418308 kernel: loop4: detected capacity change from 0 to 140768 Mar 14 00:12:52.438644 kernel: loop5: detected capacity change from 0 to 142488 Mar 14 00:12:52.456645 kernel: loop6: detected capacity change from 0 to 8 Mar 14 00:12:52.460641 kernel: loop7: detected capacity change from 0 to 217752 Mar 14 00:12:52.479393 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Mar 14 00:12:52.481814 (sd-merge)[1185]: Merged extensions into '/usr'. Mar 14 00:12:52.486929 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:12:52.486942 systemd[1]: Reloading... Mar 14 00:12:52.596657 zram_generator::config[1211]: No configuration found. Mar 14 00:12:52.705646 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:12:52.740142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:52.783698 systemd[1]: Reloading finished in 296 ms. Mar 14 00:12:52.808209 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 14 00:12:52.809559 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:12:52.810734 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:12:52.818768 systemd[1]: Starting ensure-sysext.service... Mar 14 00:12:52.822752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:12:52.827187 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:12:52.834731 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:12:52.834740 systemd[1]: Reloading... Mar 14 00:12:52.865266 systemd-udevd[1257]: Using default interface naming scheme 'v255'. Mar 14 00:12:52.868015 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:12:52.868539 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:12:52.869498 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:12:52.873931 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Mar 14 00:12:52.874018 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Mar 14 00:12:52.878687 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:12:52.878701 systemd-tmpfiles[1256]: Skipping /boot Mar 14 00:12:52.893667 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:12:52.893680 systemd-tmpfiles[1256]: Skipping /boot Mar 14 00:12:52.934644 zram_generator::config[1283]: No configuration found. 
Mar 14 00:12:53.079274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:12:53.115648 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 14 00:12:53.115919 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:12:53.116260 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:12:53.112566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:53.170647 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:12:53.171520 systemd[1]: Reloading finished in 336 ms. Mar 14 00:12:53.175647 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:12:53.175688 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:12:53.193532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:12:53.197433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:12:53.205060 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:12:53.216654 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:12:53.237860 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:53.249643 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1308) Mar 14 00:12:53.249848 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:12:53.259841 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:12:53.271833 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:12:53.284719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 14 00:12:53.295912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:12:53.299483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:12:53.337914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:12:53.340531 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:12:53.350429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:12:53.358722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 14 00:12:53.360456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:12:53.360783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:53.366897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:12:53.369717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:12:53.381935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:12:53.384758 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:12:53.385655 augenrules[1388]: No rules Mar 14 00:12:53.391683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:12:53.393067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:53.396941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:12:53.401186 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 14 00:12:53.411827 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:12:53.412565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:12:53.414924 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:53.416330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:12:53.416518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:12:53.417961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:12:53.418139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:12:53.419353 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:12:53.419566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:12:53.427368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:12:53.427564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:53.433895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:12:53.439615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:12:53.442381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:12:53.448026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:12:53.448876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:53.448986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 14 00:12:53.450834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:12:53.451016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:12:53.455858 systemd[1]: Finished ensure-sysext.service. Mar 14 00:12:53.458172 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:12:53.460776 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:12:53.466507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:12:53.475794 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:12:53.486139 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:12:53.488771 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:12:53.490976 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:12:53.495833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:12:53.496009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:12:53.498354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:12:53.507072 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:12:53.508991 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:12:53.511158 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:12:53.511940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:12:53.513083 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:12:53.513242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 14 00:12:53.518466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:12:53.518516 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:12:53.523018 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:12:53.598246 systemd-networkd[1362]: lo: Link UP Mar 14 00:12:53.598258 systemd-networkd[1362]: lo: Gained carrier Mar 14 00:12:53.602126 systemd-networkd[1362]: Enumeration completed Mar 14 00:12:53.602229 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:12:53.604646 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:12:53.604661 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:12:53.605484 systemd-networkd[1362]: eth0: Link UP Mar 14 00:12:53.605496 systemd-networkd[1362]: eth0: Gained carrier Mar 14 00:12:53.605508 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:53.643567 systemd-resolved[1364]: Positive Trust Anchors: Mar 14 00:12:53.643884 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:12:53.643953 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:12:53.647708 systemd-resolved[1364]: Defaulting to hostname 'linux'. Mar 14 00:12:53.649216 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:12:53.650094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:12:53.651209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:53.652699 systemd[1]: Reached target network.target - Network. Mar 14 00:12:53.653411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:12:53.654199 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:12:53.655079 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:12:53.656036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:12:53.656859 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:12:53.657658 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:12:53.657692 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:53.658386 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:12:53.659324 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:12:53.660335 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:12:53.661119 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:12:53.662934 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:12:53.665237 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:12:53.670843 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:12:53.673381 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:12:53.674786 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:12:53.675614 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:12:53.676346 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:12:53.677245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:12:53.677285 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:12:53.678723 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:12:53.689755 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:12:53.692932 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:12:53.696384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:12:53.698768 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:12:53.701673 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Mar 14 00:12:53.713326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:12:53.716340 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:12:53.721978 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:12:53.726862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:12:53.735817 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:12:53.737892 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:12:53.738437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:12:53.740325 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:12:53.742261 jq[1435]: false Mar 14 00:12:53.744722 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:12:53.748620 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:12:53.748904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:12:53.753172 jq[1447]: true Mar 14 00:12:53.770081 dbus-daemon[1434]: [system] SELinux support is enabled Mar 14 00:12:53.770259 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:12:53.778594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 14 00:12:53.779173 extend-filesystems[1436]: Found loop4 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found loop5 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found loop6 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found loop7 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda1 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda2 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda3 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found usr Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda4 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda6 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda7 Mar 14 00:12:53.783415 extend-filesystems[1436]: Found sda9 Mar 14 00:12:53.783415 extend-filesystems[1436]: Checking size of /dev/sda9 Mar 14 00:12:53.780702 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:12:53.781519 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:12:53.781536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:12:53.807910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:12:53.808109 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:12:53.826688 jq[1453]: true Mar 14 00:12:53.831719 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:12:53.833262 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:12:53.834230 update_engine[1445]: I20260314 00:12:53.834151 1445 main.cc:92] Flatcar Update Engine starting Mar 14 00:12:53.834426 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 14 00:12:53.842674 tar[1449]: linux-amd64/LICENSE Mar 14 00:12:53.842674 tar[1449]: linux-amd64/helm Mar 14 00:12:53.841230 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:12:53.847456 update_engine[1445]: I20260314 00:12:53.845813 1445 update_check_scheduler.cc:74] Next update check in 10m59s Mar 14 00:12:53.847783 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:12:53.852267 extend-filesystems[1436]: Resized partition /dev/sda9 Mar 14 00:12:53.856003 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:12:53.869905 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Mar 14 00:12:53.903355 coreos-metadata[1433]: Mar 14 00:12:53.902 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 14 00:12:53.953205 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:12:53.953233 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:12:53.954737 systemd-logind[1444]: New seat seat0. Mar 14 00:12:53.961241 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:12:53.973542 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:12:53.976866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:12:53.992176 systemd[1]: Starting sshkeys.service... Mar 14 00:12:54.016643 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1308) Mar 14 00:12:54.030887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:12:54.040788 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 14 00:12:54.189398 coreos-metadata[1501]: Mar 14 00:12:54.188 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 14 00:12:54.215643 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 14 00:12:54.217946 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:12:54.227249 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 14 00:12:54.227249 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 14 00:12:54.227249 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 14 00:12:54.233163 extend-filesystems[1436]: Resized filesystem in /dev/sda9 Mar 14 00:12:54.228928 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:12:54.229146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:12:54.245319 containerd[1466]: time="2026-03-14T00:12:54.245248295Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:12:54.288110 containerd[1466]: time="2026-03-14T00:12:54.288077182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.290176 containerd[1466]: time="2026-03-14T00:12:54.290144780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:54.290176 containerd[1466]: time="2026-03-14T00:12:54.290172770Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:12:54.290255 containerd[1466]: time="2026-03-14T00:12:54.290188680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:12:54.290366 containerd[1466]: time="2026-03-14T00:12:54.290345190Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:12:54.290387 containerd[1466]: time="2026-03-14T00:12:54.290367900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.290451 containerd[1466]: time="2026-03-14T00:12:54.290432429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:54.290477 containerd[1466]: time="2026-03-14T00:12:54.290450349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.291182 containerd[1466]: time="2026-03-14T00:12:54.291159659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:54.291206 containerd[1466]: time="2026-03-14T00:12:54.291182719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.291206 containerd[1466]: time="2026-03-14T00:12:54.291195719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:54.291238 containerd[1466]: time="2026-03-14T00:12:54.291205599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.291319 containerd[1466]: time="2026-03-14T00:12:54.291300009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:12:54.291559 containerd[1466]: time="2026-03-14T00:12:54.291539218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:54.292740 containerd[1466]: time="2026-03-14T00:12:54.292718517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:54.292770 containerd[1466]: time="2026-03-14T00:12:54.292740157Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:12:54.292853 containerd[1466]: time="2026-03-14T00:12:54.292835507Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:12:54.292916 containerd[1466]: time="2026-03-14T00:12:54.292900167Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:12:54.296148 containerd[1466]: time="2026-03-14T00:12:54.296125324Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:12:54.296245 containerd[1466]: time="2026-03-14T00:12:54.296227914Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:12:54.296277 containerd[1466]: time="2026-03-14T00:12:54.296249434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:12:54.296553 containerd[1466]: time="2026-03-14T00:12:54.296533263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:12:54.296572 containerd[1466]: time="2026-03-14T00:12:54.296558523Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Mar 14 00:12:54.296722 containerd[1466]: time="2026-03-14T00:12:54.296702703Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:12:54.296942 containerd[1466]: time="2026-03-14T00:12:54.296912563Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:12:54.297047 containerd[1466]: time="2026-03-14T00:12:54.297028953Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:12:54.297077 containerd[1466]: time="2026-03-14T00:12:54.297050593Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:12:54.297077 containerd[1466]: time="2026-03-14T00:12:54.297062923Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:12:54.297110 containerd[1466]: time="2026-03-14T00:12:54.297074863Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297110 containerd[1466]: time="2026-03-14T00:12:54.297086583Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297110 containerd[1466]: time="2026-03-14T00:12:54.297097483Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297110 containerd[1466]: time="2026-03-14T00:12:54.297108573Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297168 containerd[1466]: time="2026-03-14T00:12:54.297120283Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 14 00:12:54.297168 containerd[1466]: time="2026-03-14T00:12:54.297130663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297168 containerd[1466]: time="2026-03-14T00:12:54.297141353Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297168 containerd[1466]: time="2026-03-14T00:12:54.297151283Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297167823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297184663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297195543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297206423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297216823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297236 containerd[1466]: time="2026-03-14T00:12:54.297228473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297238653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297249323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297259913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297272753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297283463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297293983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297303953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297316143Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:12:54.297335 containerd[1466]: time="2026-03-14T00:12:54.297331753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297341433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297350793Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297396843Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297410842Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297420212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297430852Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297439562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297449732Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297458712Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:12:54.297472 containerd[1466]: time="2026-03-14T00:12:54.297468942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:12:54.297760 containerd[1466]: time="2026-03-14T00:12:54.297685752Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:12:54.297760 containerd[1466]: time="2026-03-14T00:12:54.297743832Z" level=info msg="Connect containerd service" Mar 14 00:12:54.297912 containerd[1466]: time="2026-03-14T00:12:54.297772092Z" level=info msg="using legacy CRI server" Mar 14 00:12:54.297912 containerd[1466]: time="2026-03-14T00:12:54.297778912Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:12:54.297912 containerd[1466]: time="2026-03-14T00:12:54.297845512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301302509Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301437788Z" level=info msg="Start subscribing containerd event" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301496918Z" level=info msg="Start recovering state" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301552948Z" level=info msg="Start event monitor" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301572578Z" level=info msg="Start 
snapshots syncer" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301580538Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301587878Z" level=info msg="Start streaming server" Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.301992068Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:12:54.303636 containerd[1466]: time="2026-03-14T00:12:54.302045548Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:12:54.302519 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:12:54.304027 systemd-networkd[1362]: eth0: DHCPv4 address 172.233.218.137/24, gateway 172.233.218.1 acquired from 23.40.197.110 Mar 14 00:12:54.304701 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1362 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:12:54.305781 containerd[1466]: time="2026-03-14T00:12:54.304578785Z" level=info msg="containerd successfully booted in 0.061167s" Mar 14 00:12:54.311754 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Mar 14 00:12:54.315881 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:12:54.445964 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:12:54.446494 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 14 00:12:54.446735 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1514 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:12:54.458960 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 14 00:12:54.471315 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:12:54.482187 polkitd[1516]: Started polkitd version 121 Mar 14 00:12:54.490045 polkitd[1516]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:12:54.490101 polkitd[1516]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:12:54.490604 polkitd[1516]: Finished loading, compiling and executing 2 rules Mar 14 00:12:54.492737 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:12:54.492868 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:12:54.494757 polkitd[1516]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:12:54.502151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:12:54.513264 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:12:54.516897 systemd-hostnamed[1514]: Hostname set to <172-233-218-137> (transient) Mar 14 00:12:54.517027 systemd-resolved[1364]: System hostname changed to '172-233-218-137'. Mar 14 00:12:54.525578 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:12:54.525838 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:12:54.534549 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:12:54.567269 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:12:54.575664 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:12:54.581813 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:12:54.583227 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:12:54.603874 tar[1449]: linux-amd64/README.md Mar 14 00:12:54.623474 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 14 00:12:54.913001 coreos-metadata[1433]: Mar 14 00:12:54.912 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 14 00:12:55.006003 coreos-metadata[1433]: Mar 14 00:12:55.005 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 14 00:12:55.202506 coreos-metadata[1501]: Mar 14 00:12:55.202 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 14 00:12:55.232997 coreos-metadata[1433]: Mar 14 00:12:55.232 INFO Fetch successful Mar 14 00:12:55.232997 coreos-metadata[1433]: Mar 14 00:12:55.232 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 14 00:12:55.292157 coreos-metadata[1501]: Mar 14 00:12:55.292 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 14 00:12:55.427164 coreos-metadata[1501]: Mar 14 00:12:55.426 INFO Fetch successful Mar 14 00:12:55.442443 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:12:55.442907 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:12:55.445470 systemd[1]: Finished sshkeys.service. Mar 14 00:12:55.537914 systemd-networkd[1362]: eth0: Gained IPv6LL Mar 14 00:12:55.541262 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:12:55.542602 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:12:55.548778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:55.552831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:12:55.575140 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:12:55.633273 coreos-metadata[1433]: Mar 14 00:12:55.633 INFO Fetch successful Mar 14 00:12:56.438792 systemd-timesyncd[1413]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org). Mar 14 00:12:56.439072 systemd-resolved[1364]: Clock change detected. Flushing caches. 
Mar 14 00:12:56.439174 systemd-timesyncd[1413]: Initial clock synchronization to Sat 2026-03-14 00:12:56.438639 UTC. Mar 14 00:12:56.500298 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:12:56.501718 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:12:57.201431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:57.203102 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:12:57.205939 systemd[1]: Startup finished in 1.044s (kernel) + 8.276s (initrd) + 5.465s (userspace) = 14.787s. Mar 14 00:12:57.206384 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:57.625974 kubelet[1587]: E0314 00:12:57.625850 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:57.629177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:57.629384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:57.881595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:12:57.886984 systemd[1]: Started sshd@0-172.233.218.137:22-4.153.228.146:43930.service - OpenSSH per-connection server daemon (4.153.228.146:43930). 
Mar 14 00:12:58.032981 sshd[1599]: Accepted publickey for core from 4.153.228.146 port 43930 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:58.035512 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:58.043880 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:12:58.048744 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:12:58.050896 systemd-logind[1444]: New session 1 of user core. Mar 14 00:12:58.062534 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:12:58.069790 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:12:58.072763 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:12:58.168754 systemd[1603]: Queued start job for default target default.target. Mar 14 00:12:58.177785 systemd[1603]: Created slice app.slice - User Application Slice. Mar 14 00:12:58.177814 systemd[1603]: Reached target paths.target - Paths. Mar 14 00:12:58.177828 systemd[1603]: Reached target timers.target - Timers. Mar 14 00:12:58.179339 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:12:58.191952 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:12:58.192081 systemd[1603]: Reached target sockets.target - Sockets. Mar 14 00:12:58.192105 systemd[1603]: Reached target basic.target - Basic System. Mar 14 00:12:58.192144 systemd[1603]: Reached target default.target - Main User Target. Mar 14 00:12:58.192180 systemd[1603]: Startup finished in 113ms. Mar 14 00:12:58.192710 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:12:58.200672 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 14 00:12:58.340075 systemd[1]: Started sshd@1-172.233.218.137:22-4.153.228.146:43944.service - OpenSSH per-connection server daemon (4.153.228.146:43944). Mar 14 00:12:58.512999 sshd[1614]: Accepted publickey for core from 4.153.228.146 port 43944 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:58.514945 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:58.519043 systemd-logind[1444]: New session 2 of user core. Mar 14 00:12:58.525681 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:12:58.661288 sshd[1614]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:58.665197 systemd[1]: sshd@1-172.233.218.137:22-4.153.228.146:43944.service: Deactivated successfully. Mar 14 00:12:58.667338 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:12:58.669165 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:12:58.670613 systemd-logind[1444]: Removed session 2. Mar 14 00:12:58.696723 systemd[1]: Started sshd@2-172.233.218.137:22-4.153.228.146:43960.service - OpenSSH per-connection server daemon (4.153.228.146:43960). Mar 14 00:12:58.846646 sshd[1621]: Accepted publickey for core from 4.153.228.146 port 43960 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:58.847741 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:58.853403 systemd-logind[1444]: New session 3 of user core. Mar 14 00:12:58.859674 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:12:58.970885 sshd[1621]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:58.977195 systemd[1]: sshd@2-172.233.218.137:22-4.153.228.146:43960.service: Deactivated successfully. Mar 14 00:12:58.979951 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:12:58.981847 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. 
Mar 14 00:12:58.983088 systemd-logind[1444]: Removed session 3. Mar 14 00:12:58.995831 systemd[1]: Started sshd@3-172.233.218.137:22-4.153.228.146:41866.service - OpenSSH per-connection server daemon (4.153.228.146:41866). Mar 14 00:12:59.143298 sshd[1628]: Accepted publickey for core from 4.153.228.146 port 41866 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:59.145510 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:59.150024 systemd-logind[1444]: New session 4 of user core. Mar 14 00:12:59.156705 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:12:59.273581 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:59.278482 systemd[1]: sshd@3-172.233.218.137:22-4.153.228.146:41866.service: Deactivated successfully. Mar 14 00:12:59.281385 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:12:59.282212 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:12:59.283770 systemd-logind[1444]: Removed session 4. Mar 14 00:12:59.318129 systemd[1]: Started sshd@4-172.233.218.137:22-4.153.228.146:41872.service - OpenSSH per-connection server daemon (4.153.228.146:41872). Mar 14 00:12:59.514541 sshd[1635]: Accepted publickey for core from 4.153.228.146 port 41872 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:59.515315 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:59.520271 systemd-logind[1444]: New session 5 of user core. Mar 14 00:12:59.525689 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 14 00:12:59.647269 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:12:59.647652 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:59.663138 sudo[1638]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:59.690850 sshd[1635]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:59.694805 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:12:59.695274 systemd[1]: sshd@4-172.233.218.137:22-4.153.228.146:41872.service: Deactivated successfully. Mar 14 00:12:59.697331 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:12:59.698258 systemd-logind[1444]: Removed session 5. Mar 14 00:12:59.723484 systemd[1]: Started sshd@5-172.233.218.137:22-4.153.228.146:41880.service - OpenSSH per-connection server daemon (4.153.228.146:41880). Mar 14 00:12:59.899942 sshd[1643]: Accepted publickey for core from 4.153.228.146 port 41880 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:12:59.902752 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:59.908417 systemd-logind[1444]: New session 6 of user core. Mar 14 00:12:59.914717 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:13:00.024952 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:13:00.025325 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:00.029479 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:00.035877 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:13:00.036257 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:00.053850 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:00.056040 auditctl[1650]: No rules Mar 14 00:13:00.056660 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:13:00.056902 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:00.060384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:00.100158 augenrules[1668]: No rules Mar 14 00:13:00.101955 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:00.103549 sudo[1646]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:00.130286 sshd[1643]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:00.134355 systemd[1]: sshd@5-172.233.218.137:22-4.153.228.146:41880.service: Deactivated successfully. Mar 14 00:13:00.137045 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:13:00.138728 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:13:00.140424 systemd-logind[1444]: Removed session 6. Mar 14 00:13:00.162289 systemd[1]: Started sshd@6-172.233.218.137:22-4.153.228.146:41884.service - OpenSSH per-connection server daemon (4.153.228.146:41884). 
Mar 14 00:13:00.324591 sshd[1676]: Accepted publickey for core from 4.153.228.146 port 41884 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4 Mar 14 00:13:00.325539 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:00.332783 systemd-logind[1444]: New session 7 of user core. Mar 14 00:13:00.342704 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:13:00.443886 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:13:00.444274 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:00.707769 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:13:00.717865 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:13:00.969981 dockerd[1696]: time="2026-03-14T00:13:00.969834564Z" level=info msg="Starting up" Mar 14 00:13:01.100123 dockerd[1696]: time="2026-03-14T00:13:01.100090774Z" level=info msg="Loading containers: start." Mar 14 00:13:01.205715 kernel: Initializing XFRM netlink socket Mar 14 00:13:01.289617 systemd-networkd[1362]: docker0: Link UP Mar 14 00:13:01.302141 dockerd[1696]: time="2026-03-14T00:13:01.302104672Z" level=info msg="Loading containers: done." 
Mar 14 00:13:01.316872 dockerd[1696]: time="2026-03-14T00:13:01.316833857Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:13:01.317010 dockerd[1696]: time="2026-03-14T00:13:01.316950757Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:13:01.317075 dockerd[1696]: time="2026-03-14T00:13:01.317051697Z" level=info msg="Daemon has completed initialization"
Mar 14 00:13:01.342918 dockerd[1696]: time="2026-03-14T00:13:01.342805961Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:13:01.342983 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:13:02.121596 containerd[1466]: time="2026-03-14T00:13:02.121500462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 14 00:13:02.710155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518401269.mount: Deactivated successfully.
Mar 14 00:13:03.992987 containerd[1466]: time="2026-03-14T00:13:03.992920231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:03.994260 containerd[1466]: time="2026-03-14T00:13:03.994205790Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696473"
Mar 14 00:13:03.996587 containerd[1466]: time="2026-03-14T00:13:03.994714149Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:03.998581 containerd[1466]: time="2026-03-14T00:13:03.998262806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:03.999582 containerd[1466]: time="2026-03-14T00:13:03.999323934Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.877762412s"
Mar 14 00:13:03.999582 containerd[1466]: time="2026-03-14T00:13:03.999355684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 14 00:13:04.000085 containerd[1466]: time="2026-03-14T00:13:04.000065574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 14 00:13:05.132320 containerd[1466]: time="2026-03-14T00:13:05.132279141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:05.133550 containerd[1466]: time="2026-03-14T00:13:05.133253691Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450706"
Mar 14 00:13:05.135575 containerd[1466]: time="2026-03-14T00:13:05.135386428Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:05.139129 containerd[1466]: time="2026-03-14T00:13:05.139105675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:05.139889 containerd[1466]: time="2026-03-14T00:13:05.139857594Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.13959498s"
Mar 14 00:13:05.139941 containerd[1466]: time="2026-03-14T00:13:05.139888814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 14 00:13:05.140684 containerd[1466]: time="2026-03-14T00:13:05.140273413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 14 00:13:06.132613 containerd[1466]: time="2026-03-14T00:13:06.131394412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:06.132613 containerd[1466]: time="2026-03-14T00:13:06.132302081Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548435"
Mar 14 00:13:06.132613 containerd[1466]: time="2026-03-14T00:13:06.132320631Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:06.135140 containerd[1466]: time="2026-03-14T00:13:06.134896619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:06.136427 containerd[1466]: time="2026-03-14T00:13:06.135834798Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 995.535025ms"
Mar 14 00:13:06.136427 containerd[1466]: time="2026-03-14T00:13:06.135862248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 14 00:13:06.136543 containerd[1466]: time="2026-03-14T00:13:06.136521777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 14 00:13:07.178071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855325678.mount: Deactivated successfully.
Mar 14 00:13:07.433333 containerd[1466]: time="2026-03-14T00:13:07.432995201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:07.434214 containerd[1466]: time="2026-03-14T00:13:07.434037430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685318"
Mar 14 00:13:07.434986 containerd[1466]: time="2026-03-14T00:13:07.434776099Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:07.436865 containerd[1466]: time="2026-03-14T00:13:07.436830867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:07.437481 containerd[1466]: time="2026-03-14T00:13:07.437443936Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.300895169s"
Mar 14 00:13:07.437481 containerd[1466]: time="2026-03-14T00:13:07.437480146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 14 00:13:07.438343 containerd[1466]: time="2026-03-14T00:13:07.437963456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 14 00:13:07.879693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:13:07.887721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:07.989374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825831302.mount: Deactivated successfully.
Mar 14 00:13:08.075734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:08.085877 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:08.139477 kubelet[1927]: E0314 00:13:08.139247 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:08.146705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:08.147266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:08.809212 containerd[1466]: time="2026-03-14T00:13:08.809162695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:08.811290 containerd[1466]: time="2026-03-14T00:13:08.810039354Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548"
Mar 14 00:13:08.811290 containerd[1466]: time="2026-03-14T00:13:08.811242542Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:08.814608 containerd[1466]: time="2026-03-14T00:13:08.814539659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:08.815668 containerd[1466]: time="2026-03-14T00:13:08.815637498Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.377643452s"
Mar 14 00:13:08.815728 containerd[1466]: time="2026-03-14T00:13:08.815668338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 14 00:13:08.816767 containerd[1466]: time="2026-03-14T00:13:08.816119868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:13:09.294955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100661543.mount: Deactivated successfully.
Mar 14 00:13:09.298553 containerd[1466]: time="2026-03-14T00:13:09.298501645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:09.299532 containerd[1466]: time="2026-03-14T00:13:09.299484864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Mar 14 00:13:09.300350 containerd[1466]: time="2026-03-14T00:13:09.300307363Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:09.302905 containerd[1466]: time="2026-03-14T00:13:09.302876931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:09.304200 containerd[1466]: time="2026-03-14T00:13:09.303789820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 487.639092ms"
Mar 14 00:13:09.304200 containerd[1466]: time="2026-03-14T00:13:09.303823220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:13:09.304370 containerd[1466]: time="2026-03-14T00:13:09.304342579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 14 00:13:09.831891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055436717.mount: Deactivated successfully.
Mar 14 00:13:10.426978 containerd[1466]: time="2026-03-14T00:13:10.426928077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:10.428220 containerd[1466]: time="2026-03-14T00:13:10.428187946Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630328"
Mar 14 00:13:10.428795 containerd[1466]: time="2026-03-14T00:13:10.428749995Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:10.431852 containerd[1466]: time="2026-03-14T00:13:10.431830902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:10.433287 containerd[1466]: time="2026-03-14T00:13:10.432770711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.128399482s"
Mar 14 00:13:10.433287 containerd[1466]: time="2026-03-14T00:13:10.432797321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 14 00:13:11.484454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:11.491739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:11.522056 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)...
Mar 14 00:13:11.522071 systemd[1]: Reloading...
Mar 14 00:13:11.659607 zram_generator::config[2112]: No configuration found.
Mar 14 00:13:11.776887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:11.851649 systemd[1]: Reloading finished in 329 ms.
Mar 14 00:13:11.897924 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:13:11.898019 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:13:11.898287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:11.900810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:12.072416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:12.077503 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:13:12.110295 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:13:12.398661 kubelet[2165]: I0314 00:13:12.397477 2165 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:13:12.398661 kubelet[2165]: I0314 00:13:12.397520 2165 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:13:12.399784 kubelet[2165]: I0314 00:13:12.399765 2165 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:13:12.399784 kubelet[2165]: I0314 00:13:12.399783 2165 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:13:12.400169 kubelet[2165]: I0314 00:13:12.400152 2165 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:13:12.404484 kubelet[2165]: E0314 00:13:12.404456 2165 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.233.218.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.218.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:13:12.404839 kubelet[2165]: I0314 00:13:12.404812 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:13:12.408180 kubelet[2165]: E0314 00:13:12.408141 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:13:12.408232 kubelet[2165]: I0314 00:13:12.408190 2165 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:13:12.412267 kubelet[2165]: I0314 00:13:12.412249 2165 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:13:12.413385 kubelet[2165]: I0314 00:13:12.413345 2165 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:13:12.413511 kubelet[2165]: I0314 00:13:12.413376 2165 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-218-137","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:13:12.413511 kubelet[2165]: I0314 00:13:12.413508 2165 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:13:12.413640 kubelet[2165]: I0314 00:13:12.413517 2165 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:13:12.413640 kubelet[2165]: I0314 00:13:12.413615 2165 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:13:12.414731 kubelet[2165]: I0314 00:13:12.414717 2165 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:13:12.414880 kubelet[2165]: I0314 00:13:12.414869 2165 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:13:12.414935 kubelet[2165]: I0314 00:13:12.414894 2165 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:13:12.414935 kubelet[2165]: I0314 00:13:12.414916 2165 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:13:12.414935 kubelet[2165]: I0314 00:13:12.414928 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:13:12.417267 kubelet[2165]: I0314 00:13:12.416708 2165 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:13:12.418874 kubelet[2165]: I0314 00:13:12.418504 2165 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:13:12.418874 kubelet[2165]: I0314 00:13:12.418532 2165 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:13:12.418874 kubelet[2165]: W0314 00:13:12.418617 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:13:12.421040 kubelet[2165]: I0314 00:13:12.421027 2165 server.go:1257] "Started kubelet"
Mar 14 00:13:12.429640 kubelet[2165]: I0314 00:13:12.429613 2165 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:13:12.432335 kubelet[2165]: I0314 00:13:12.431639 2165 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:13:12.432335 kubelet[2165]: I0314 00:13:12.431762 2165 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:13:12.435188 kubelet[2165]: I0314 00:13:12.435143 2165 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:13:12.435279 kubelet[2165]: I0314 00:13:12.435265 2165 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:13:12.435472 kubelet[2165]: I0314 00:13:12.435459 2165 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:13:12.436870 kubelet[2165]: E0314 00:13:12.435769 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.218.137:6443/api/v1/namespaces/default/events\": dial tcp 172.233.218.137:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-218-137.189c8cd34c2b0ee5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-218-137,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-218-137,},FirstTimestamp:2026-03-14 00:13:12.421011173 +0000 UTC m=+0.340125121,LastTimestamp:2026-03-14 00:13:12.421011173 +0000 UTC m=+0.340125121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-218-137,}"
Mar 14 00:13:12.437779 kubelet[2165]: I0314 00:13:12.437452 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:13:12.440905 kubelet[2165]: E0314 00:13:12.440888 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:12.441410 kubelet[2165]: I0314 00:13:12.441396 2165 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:13:12.441675 kubelet[2165]: I0314 00:13:12.441659 2165 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:13:12.441791 kubelet[2165]: I0314 00:13:12.441776 2165 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:13:12.442374 kubelet[2165]: E0314 00:13:12.442352 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.218.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-218-137?timeout=10s\": dial tcp 172.233.218.137:6443: connect: connection refused" interval="200ms"
Mar 14 00:13:12.442622 kubelet[2165]: E0314 00:13:12.442606 2165 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:13:12.442821 kubelet[2165]: I0314 00:13:12.442808 2165 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:13:12.442938 kubelet[2165]: I0314 00:13:12.442922 2165 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:13:12.444421 kubelet[2165]: I0314 00:13:12.444394 2165 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:13:12.455308 kubelet[2165]: I0314 00:13:12.455199 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:13:12.456586 kubelet[2165]: I0314 00:13:12.456432 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:13:12.456586 kubelet[2165]: I0314 00:13:12.456446 2165 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:13:12.456586 kubelet[2165]: I0314 00:13:12.456464 2165 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:13:12.456586 kubelet[2165]: E0314 00:13:12.456510 2165 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:13:12.478113 kubelet[2165]: I0314 00:13:12.478087 2165 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:13:12.478113 kubelet[2165]: I0314 00:13:12.478104 2165 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:13:12.478206 kubelet[2165]: I0314 00:13:12.478122 2165 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:13:12.479366 kubelet[2165]: I0314 00:13:12.479346 2165 policy_none.go:50] "Start"
Mar 14 00:13:12.479416 kubelet[2165]: I0314 00:13:12.479369 2165 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:13:12.479416 kubelet[2165]: I0314 00:13:12.479385 2165 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:13:12.480198 kubelet[2165]: I0314 00:13:12.480186 2165 policy_none.go:44] "Start"
Mar 14 00:13:12.484173 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:13:12.495621 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:13:12.499801 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:13:12.513490 kubelet[2165]: E0314 00:13:12.512598 2165 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:13:12.513490 kubelet[2165]: I0314 00:13:12.512763 2165 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:13:12.513490 kubelet[2165]: I0314 00:13:12.512773 2165 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:13:12.513490 kubelet[2165]: I0314 00:13:12.513265 2165 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:13:12.515164 kubelet[2165]: E0314 00:13:12.515150 2165 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:13:12.515255 kubelet[2165]: E0314 00:13:12.515244 2165 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-218-137\" not found"
Mar 14 00:13:12.567461 systemd[1]: Created slice kubepods-burstable-pod1399db402615b023c182e8e89c95e2e9.slice - libcontainer container kubepods-burstable-pod1399db402615b023c182e8e89c95e2e9.slice.
Mar 14 00:13:12.584358 kubelet[2165]: E0314 00:13:12.584318 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:12.588093 systemd[1]: Created slice kubepods-burstable-podf05b07907eff29c7aa364b22f6e1bcc0.slice - libcontainer container kubepods-burstable-podf05b07907eff29c7aa364b22f6e1bcc0.slice.
Mar 14 00:13:12.589793 kubelet[2165]: E0314 00:13:12.589767 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:12.602827 systemd[1]: Created slice kubepods-burstable-poda9ed4c6481d908da4d119532ab647c54.slice - libcontainer container kubepods-burstable-poda9ed4c6481d908da4d119532ab647c54.slice.
Mar 14 00:13:12.606216 kubelet[2165]: E0314 00:13:12.606191 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:12.614590 kubelet[2165]: I0314 00:13:12.614555 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-233-218-137"
Mar 14 00:13:12.614851 kubelet[2165]: E0314 00:13:12.614830 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.233.218.137:6443/api/v1/nodes\": dial tcp 172.233.218.137:6443: connect: connection refused" node="172-233-218-137"
Mar 14 00:13:12.642882 kubelet[2165]: I0314 00:13:12.642852 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-k8s-certs\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:12.643877 kubelet[2165]: I0314 00:13:12.643502 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:12.643877 kubelet[2165]: I0314 00:13:12.643525 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-ca-certs\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:12.643877 kubelet[2165]: I0314 00:13:12.643554 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-k8s-certs\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:12.643877 kubelet[2165]: I0314 00:13:12.643608 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:12.643877 kubelet[2165]: I0314 00:13:12.643631 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9ed4c6481d908da4d119532ab647c54-kubeconfig\") pod \"kube-scheduler-172-233-218-137\" (UID: \"a9ed4c6481d908da4d119532ab647c54\") " pod="kube-system/kube-scheduler-172-233-218-137"
Mar 14 00:13:12.644161 kubelet[2165]: I0314 00:13:12.643653 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-flexvolume-dir\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:12.644161 kubelet[2165]: I0314 00:13:12.643689 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-kubeconfig\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:12.644161 kubelet[2165]: I0314 00:13:12.643718 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-ca-certs\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:12.644161 kubelet[2165]: E0314 00:13:12.643446 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.218.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-218-137?timeout=10s\": dial tcp 172.233.218.137:6443: connect: connection refused" interval="400ms"
Mar 14 00:13:12.817280 kubelet[2165]: I0314 00:13:12.817189 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-233-218-137"
Mar 14 00:13:12.817588 kubelet[2165]: E0314 00:13:12.817448 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.233.218.137:6443/api/v1/nodes\": dial tcp 172.233.218.137:6443: connect: connection refused" node="172-233-218-137"
Mar 14 00:13:12.886542 kubelet[2165]: E0314 00:13:12.886511 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:12.887271 containerd[1466]: time="2026-03-14T00:13:12.887242106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-218-137,Uid:1399db402615b023c182e8e89c95e2e9,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:12.891240 kubelet[2165]: E0314 00:13:12.891223 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:12.893746 containerd[1466]: time="2026-03-14T00:13:12.893718520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-218-137,Uid:f05b07907eff29c7aa364b22f6e1bcc0,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:12.908322 kubelet[2165]: E0314 00:13:12.908300 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:12.908795 containerd[1466]: time="2026-03-14T00:13:12.908601655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-218-137,Uid:a9ed4c6481d908da4d119532ab647c54,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:13.044585 kubelet[2165]: E0314 00:13:13.044502 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.218.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-218-137?timeout=10s\": dial tcp 172.233.218.137:6443: connect: connection refused" interval="800ms"
Mar 14 00:13:13.219602 kubelet[2165]: I0314 00:13:13.219497 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-233-218-137"
Mar 14 00:13:13.219957 kubelet[2165]: E0314 00:13:13.219769 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.233.218.137:6443/api/v1/nodes\": dial tcp 172.233.218.137:6443: connect: connection refused" node="172-233-218-137"
Mar 14 00:13:13.345275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233050634.mount: Deactivated
successfully. Mar 14 00:13:13.349704 containerd[1466]: time="2026-03-14T00:13:13.349647384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:13.350979 containerd[1466]: time="2026-03-14T00:13:13.350945193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:13.352527 containerd[1466]: time="2026-03-14T00:13:13.352469821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Mar 14 00:13:13.352875 containerd[1466]: time="2026-03-14T00:13:13.352832701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:13.353500 containerd[1466]: time="2026-03-14T00:13:13.353467320Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:13.353885 containerd[1466]: time="2026-03-14T00:13:13.353799170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:13:13.354382 containerd[1466]: time="2026-03-14T00:13:13.354351609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:13:13.355761 containerd[1466]: time="2026-03-14T00:13:13.355724418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 
00:13:13.357588 containerd[1466]: time="2026-03-14T00:13:13.357023227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.248517ms" Mar 14 00:13:13.359475 containerd[1466]: time="2026-03-14T00:13:13.359241074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.932578ms" Mar 14 00:13:13.361224 containerd[1466]: time="2026-03-14T00:13:13.361199682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.553217ms" Mar 14 00:13:13.466010 containerd[1466]: time="2026-03-14T00:13:13.465911868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:13.466010 containerd[1466]: time="2026-03-14T00:13:13.465958918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:13.466010 containerd[1466]: time="2026-03-14T00:13:13.465972368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.466726 containerd[1466]: time="2026-03-14T00:13:13.466242857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.468770 containerd[1466]: time="2026-03-14T00:13:13.467765556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:13.468770 containerd[1466]: time="2026-03-14T00:13:13.467807236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:13.468770 containerd[1466]: time="2026-03-14T00:13:13.467821046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.468770 containerd[1466]: time="2026-03-14T00:13:13.467882086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.470022 containerd[1466]: time="2026-03-14T00:13:13.469760174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:13.470022 containerd[1466]: time="2026-03-14T00:13:13.469815384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:13.470022 containerd[1466]: time="2026-03-14T00:13:13.469829374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.470022 containerd[1466]: time="2026-03-14T00:13:13.469889664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:13.502714 systemd[1]: Started cri-containerd-5781f816f567357e933e1ff025d482c1a14a87f623594786e602b4e94a46bf8f.scope - libcontainer container 5781f816f567357e933e1ff025d482c1a14a87f623594786e602b4e94a46bf8f. Mar 14 00:13:13.509976 systemd[1]: Started cri-containerd-0cf4dac7909814f82d0166d2278ed5b61326275414ff3fe6c843d79fd5d7297e.scope - libcontainer container 0cf4dac7909814f82d0166d2278ed5b61326275414ff3fe6c843d79fd5d7297e. Mar 14 00:13:13.529920 systemd[1]: Started cri-containerd-8172e456e0a1e1b8a9088b0e4ec872092c7ce21d6fe28a2f0abe3f103b1465d1.scope - libcontainer container 8172e456e0a1e1b8a9088b0e4ec872092c7ce21d6fe28a2f0abe3f103b1465d1. Mar 14 00:13:13.571217 containerd[1466]: time="2026-03-14T00:13:13.571159723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-218-137,Uid:f05b07907eff29c7aa364b22f6e1bcc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf4dac7909814f82d0166d2278ed5b61326275414ff3fe6c843d79fd5d7297e\"" Mar 14 00:13:13.573570 kubelet[2165]: E0314 00:13:13.572699 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:13.581218 containerd[1466]: time="2026-03-14T00:13:13.581187182Z" level=info msg="CreateContainer within sandbox \"0cf4dac7909814f82d0166d2278ed5b61326275414ff3fe6c843d79fd5d7297e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:13:13.604981 containerd[1466]: time="2026-03-14T00:13:13.604931429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-218-137,Uid:1399db402615b023c182e8e89c95e2e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5781f816f567357e933e1ff025d482c1a14a87f623594786e602b4e94a46bf8f\"" Mar 14 00:13:13.606865 containerd[1466]: time="2026-03-14T00:13:13.606842617Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-218-137,Uid:a9ed4c6481d908da4d119532ab647c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"8172e456e0a1e1b8a9088b0e4ec872092c7ce21d6fe28a2f0abe3f103b1465d1\"" Mar 14 00:13:13.607776 containerd[1466]: time="2026-03-14T00:13:13.607742636Z" level=info msg="CreateContainer within sandbox \"0cf4dac7909814f82d0166d2278ed5b61326275414ff3fe6c843d79fd5d7297e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61e515c59a0e75ccc868633b92eacc53a0c083d5295c73baa700f14d073e2233\"" Mar 14 00:13:13.608577 kubelet[2165]: E0314 00:13:13.608545 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:13.609553 kubelet[2165]: E0314 00:13:13.609281 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:13.610270 containerd[1466]: time="2026-03-14T00:13:13.610249533Z" level=info msg="StartContainer for \"61e515c59a0e75ccc868633b92eacc53a0c083d5295c73baa700f14d073e2233\"" Mar 14 00:13:13.617047 containerd[1466]: time="2026-03-14T00:13:13.617025577Z" level=info msg="CreateContainer within sandbox \"5781f816f567357e933e1ff025d482c1a14a87f623594786e602b4e94a46bf8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:13:13.617765 containerd[1466]: time="2026-03-14T00:13:13.617745496Z" level=info msg="CreateContainer within sandbox \"8172e456e0a1e1b8a9088b0e4ec872092c7ce21d6fe28a2f0abe3f103b1465d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:13:13.633307 containerd[1466]: time="2026-03-14T00:13:13.633282990Z" level=info msg="CreateContainer within sandbox \"5781f816f567357e933e1ff025d482c1a14a87f623594786e602b4e94a46bf8f\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7d8ee92e961bc4ef0784a0c0bbe182d08660b34b9db2aae19ec5df57578c6fe\"" Mar 14 00:13:13.633913 containerd[1466]: time="2026-03-14T00:13:13.633888200Z" level=info msg="StartContainer for \"e7d8ee92e961bc4ef0784a0c0bbe182d08660b34b9db2aae19ec5df57578c6fe\"" Mar 14 00:13:13.634607 containerd[1466]: time="2026-03-14T00:13:13.634532979Z" level=info msg="CreateContainer within sandbox \"8172e456e0a1e1b8a9088b0e4ec872092c7ce21d6fe28a2f0abe3f103b1465d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9832002c2a45f79c7f2b2d3b9284d1d1683ce82c914334a8678476d5273e6ff6\"" Mar 14 00:13:13.634940 containerd[1466]: time="2026-03-14T00:13:13.634891629Z" level=info msg="StartContainer for \"9832002c2a45f79c7f2b2d3b9284d1d1683ce82c914334a8678476d5273e6ff6\"" Mar 14 00:13:13.666677 systemd[1]: Started cri-containerd-61e515c59a0e75ccc868633b92eacc53a0c083d5295c73baa700f14d073e2233.scope - libcontainer container 61e515c59a0e75ccc868633b92eacc53a0c083d5295c73baa700f14d073e2233. Mar 14 00:13:13.682708 systemd[1]: Started cri-containerd-e7d8ee92e961bc4ef0784a0c0bbe182d08660b34b9db2aae19ec5df57578c6fe.scope - libcontainer container e7d8ee92e961bc4ef0784a0c0bbe182d08660b34b9db2aae19ec5df57578c6fe. Mar 14 00:13:13.706698 systemd[1]: Started cri-containerd-9832002c2a45f79c7f2b2d3b9284d1d1683ce82c914334a8678476d5273e6ff6.scope - libcontainer container 9832002c2a45f79c7f2b2d3b9284d1d1683ce82c914334a8678476d5273e6ff6. 
Mar 14 00:13:13.753494 containerd[1466]: time="2026-03-14T00:13:13.753392070Z" level=info msg="StartContainer for \"e7d8ee92e961bc4ef0784a0c0bbe182d08660b34b9db2aae19ec5df57578c6fe\" returns successfully"
Mar 14 00:13:13.764712 containerd[1466]: time="2026-03-14T00:13:13.764681479Z" level=info msg="StartContainer for \"61e515c59a0e75ccc868633b92eacc53a0c083d5295c73baa700f14d073e2233\" returns successfully"
Mar 14 00:13:13.810118 containerd[1466]: time="2026-03-14T00:13:13.810078264Z" level=info msg="StartContainer for \"9832002c2a45f79c7f2b2d3b9284d1d1683ce82c914334a8678476d5273e6ff6\" returns successfully"
Mar 14 00:13:14.021929 kubelet[2165]: I0314 00:13:14.021573 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-233-218-137"
Mar 14 00:13:14.483131 kubelet[2165]: E0314 00:13:14.483020 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:14.484862 kubelet[2165]: E0314 00:13:14.483624 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:14.484862 kubelet[2165]: E0314 00:13:14.483723 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:14.484862 kubelet[2165]: E0314 00:13:14.483640 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:14.484862 kubelet[2165]: E0314 00:13:14.484404 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:14.484862 kubelet[2165]: E0314 00:13:14.484481 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:14.607424 kubelet[2165]: E0314 00:13:14.607363 2165 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:14.663021 kubelet[2165]: I0314 00:13:14.662605 2165 kubelet_node_status.go:77] "Successfully registered node" node="172-233-218-137"
Mar 14 00:13:14.663021 kubelet[2165]: E0314 00:13:14.662635 2165 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-233-218-137\": node \"172-233-218-137\" not found"
Mar 14 00:13:14.669363 kubelet[2165]: E0314 00:13:14.669331 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:14.769580 kubelet[2165]: E0314 00:13:14.769533 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:14.869689 kubelet[2165]: E0314 00:13:14.869652 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:14.969757 kubelet[2165]: E0314 00:13:14.969722 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.070795 kubelet[2165]: E0314 00:13:15.070505 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.171184 kubelet[2165]: E0314 00:13:15.171159 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.271780 kubelet[2165]: E0314 00:13:15.271747 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.372799 kubelet[2165]: E0314 00:13:15.372702 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.473179 kubelet[2165]: E0314 00:13:15.473140 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.485587 kubelet[2165]: E0314 00:13:15.485551 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:15.485906 kubelet[2165]: E0314 00:13:15.485681 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:15.485906 kubelet[2165]: E0314 00:13:15.485868 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:15.485947 kubelet[2165]: E0314 00:13:15.485938 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:15.573822 kubelet[2165]: E0314 00:13:15.573765 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.586958 kubelet[2165]: E0314 00:13:15.586933 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:15.587078 kubelet[2165]: E0314 00:13:15.587060 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:15.674983 kubelet[2165]: E0314 00:13:15.674690 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.775318 kubelet[2165]: E0314 00:13:15.775242 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.875883 kubelet[2165]: E0314 00:13:15.875842 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:15.976767 kubelet[2165]: E0314 00:13:15.976623 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.077683 kubelet[2165]: E0314 00:13:16.077640 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.178493 kubelet[2165]: E0314 00:13:16.178437 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.279016 kubelet[2165]: E0314 00:13:16.278981 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.379771 kubelet[2165]: E0314 00:13:16.379721 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.480164 kubelet[2165]: E0314 00:13:16.480100 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-233-218-137\" not found"
Mar 14 00:13:16.487750 kubelet[2165]: E0314 00:13:16.487520 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:16.487750 kubelet[2165]: E0314 00:13:16.487633 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-218-137\" not found" node="172-233-218-137"
Mar 14 00:13:16.487750 kubelet[2165]: E0314 00:13:16.487656 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:16.487750 kubelet[2165]: E0314 00:13:16.487731 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:16.529843 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-7.scope)...
Mar 14 00:13:16.529861 systemd[1]: Reloading...
Mar 14 00:13:16.643587 kubelet[2165]: I0314 00:13:16.642707 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:16.644594 zram_generator::config[2482]: No configuration found.
Mar 14 00:13:16.650947 kubelet[2165]: I0314 00:13:16.650870 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:16.655752 kubelet[2165]: I0314 00:13:16.655678 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-218-137"
Mar 14 00:13:16.757959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:16.846974 systemd[1]: Reloading finished in 316 ms.
Mar 14 00:13:16.897602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:16.916227 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:13:16.916468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:16.922903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:17.106827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:17.119913 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:13:17.164590 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:13:17.171456 kubelet[2530]: I0314 00:13:17.171420 2530 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:13:17.171578 kubelet[2530]: I0314 00:13:17.171546 2530 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:13:17.171647 kubelet[2530]: I0314 00:13:17.171637 2530 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:13:17.171702 kubelet[2530]: I0314 00:13:17.171691 2530 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:13:17.171958 kubelet[2530]: I0314 00:13:17.171945 2530 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:13:17.173070 kubelet[2530]: I0314 00:13:17.173048 2530 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:13:17.175616 kubelet[2530]: I0314 00:13:17.175589 2530 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:13:17.181947 kubelet[2530]: E0314 00:13:17.181927 2530 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:13:17.182241 kubelet[2530]: I0314 00:13:17.182228 2530 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:13:17.185796 kubelet[2530]: I0314 00:13:17.185778 2530 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:13:17.186036 kubelet[2530]: I0314 00:13:17.186003 2530 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:13:17.186139 kubelet[2530]: I0314 00:13:17.186031 2530 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-218-137","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:13:17.186209 kubelet[2530]: I0314 00:13:17.186144 2530 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:13:17.186209 kubelet[2530]: I0314 00:13:17.186152 2530 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:13:17.186209 kubelet[2530]: I0314 00:13:17.186170 2530 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:13:17.186338 kubelet[2530]: I0314 00:13:17.186326 2530 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:13:17.186483 kubelet[2530]: I0314 00:13:17.186472 2530 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:13:17.186512 kubelet[2530]: I0314 00:13:17.186489 2530 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:13:17.186512 kubelet[2530]: I0314 00:13:17.186502 2530 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:13:17.186512 kubelet[2530]: I0314 00:13:17.186510 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:13:17.189068 kubelet[2530]: I0314 00:13:17.189052 2530 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:13:17.189751 kubelet[2530]: I0314 00:13:17.189736 2530 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:13:17.189832 kubelet[2530]: I0314 00:13:17.189822 2530 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:13:17.192982 kubelet[2530]: I0314 00:13:17.192968 2530 server.go:1257] "Started kubelet"
Mar 14 00:13:17.196213 kubelet[2530]: I0314 00:13:17.196059 2530 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:13:17.199823 kubelet[2530]: I0314 00:13:17.197170 2530 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:13:17.199823 kubelet[2530]: I0314 00:13:17.197158 2530 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:13:17.199823 kubelet[2530]: I0314 00:13:17.197870 2530 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:13:17.199823 kubelet[2530]: I0314 00:13:17.198082 2530 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:13:17.201267 kubelet[2530]: I0314 00:13:17.200262 2530 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:13:17.207408 kubelet[2530]: I0314 00:13:17.207382 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:13:17.210723 kubelet[2530]: I0314 00:13:17.210703 2530 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:13:17.212661 kubelet[2530]: I0314 00:13:17.212303 2530 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:13:17.212661 kubelet[2530]: I0314 00:13:17.212435 2530 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:13:17.221258 kubelet[2530]: I0314 00:13:17.221242 2530 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:13:17.221809 kubelet[2530]: I0314 00:13:17.221795 2530 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:13:17.221919 kubelet[2530]: I0314 00:13:17.221891 2530 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:13:17.222003 kubelet[2530]: I0314 00:13:17.221985 2530 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:13:17.223420 kubelet[2530]: I0314 00:13:17.223399 2530 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:13:17.223420 kubelet[2530]: I0314 00:13:17.223418 2530 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:13:17.223498 kubelet[2530]: I0314 00:13:17.223442 2530 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:13:17.223498 kubelet[2530]: E0314 00:13:17.223485 2530 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:13:17.291011 kubelet[2530]: I0314 00:13:17.290971 2530 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:13:17.291457 kubelet[2530]: I0314 00:13:17.290985 2530 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:13:17.291457 kubelet[2530]: I0314 00:13:17.291288 2530 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291541 2530 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291554 2530 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291612 2530 policy_none.go:50] "Start"
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291621 2530 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291632 2530 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291775 2530 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:13:17.292402 kubelet[2530]: I0314 00:13:17.291784 2530 policy_none.go:44] "Start"
Mar 14 00:13:17.301772 kubelet[2530]: E0314 00:13:17.301756 2530 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:13:17.302073 kubelet[2530]: I0314 00:13:17.302062 2530 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:13:17.303051 kubelet[2530]: I0314 00:13:17.303008 2530 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:13:17.303952 kubelet[2530]: I0314 00:13:17.303922 2530 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:13:17.311059 kubelet[2530]: E0314 00:13:17.311039 2530 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:13:17.326768 kubelet[2530]: I0314 00:13:17.326731 2530 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:17.327385 kubelet[2530]: I0314 00:13:17.327356 2530 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-218-137"
Mar 14 00:13:17.332758 kubelet[2530]: I0314 00:13:17.332739 2530 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:17.338753 kubelet[2530]: E0314 00:13:17.338720 2530 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-218-137\" already exists" pod="kube-system/kube-apiserver-172-233-218-137"
Mar 14 00:13:17.340735 kubelet[2530]: E0314 00:13:17.340705 2530 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-218-137\" already exists" pod="kube-system/kube-scheduler-172-233-218-137"
Mar 14 00:13:17.340797 kubelet[2530]: E0314 00:13:17.340763 2530 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-218-137\" already exists" pod="kube-system/kube-controller-manager-172-233-218-137"
Mar 14 00:13:17.413307 kubelet[2530]: I0314 00:13:17.413307 2530 
kubelet_node_status.go:74] "Attempting to register node" node="172-233-218-137" Mar 14 00:13:17.418610 kubelet[2530]: I0314 00:13:17.418585 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-k8s-certs\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137" Mar 14 00:13:17.418919 kubelet[2530]: I0314 00:13:17.418721 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-kubeconfig\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137" Mar 14 00:13:17.418919 kubelet[2530]: I0314 00:13:17.418783 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137" Mar 14 00:13:17.418919 kubelet[2530]: I0314 00:13:17.418807 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-k8s-certs\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137" Mar 14 00:13:17.418919 kubelet[2530]: I0314 00:13:17.418824 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a9ed4c6481d908da4d119532ab647c54-kubeconfig\") pod \"kube-scheduler-172-233-218-137\" (UID: \"a9ed4c6481d908da4d119532ab647c54\") " pod="kube-system/kube-scheduler-172-233-218-137" Mar 14 00:13:17.418919 kubelet[2530]: I0314 00:13:17.418851 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-ca-certs\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137" Mar 14 00:13:17.419251 kubelet[2530]: I0314 00:13:17.418871 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1399db402615b023c182e8e89c95e2e9-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-218-137\" (UID: \"1399db402615b023c182e8e89c95e2e9\") " pod="kube-system/kube-apiserver-172-233-218-137" Mar 14 00:13:17.419251 kubelet[2530]: I0314 00:13:17.418888 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-ca-certs\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137" Mar 14 00:13:17.419251 kubelet[2530]: I0314 00:13:17.418900 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f05b07907eff29c7aa364b22f6e1bcc0-flexvolume-dir\") pod \"kube-controller-manager-172-233-218-137\" (UID: \"f05b07907eff29c7aa364b22f6e1bcc0\") " pod="kube-system/kube-controller-manager-172-233-218-137" Mar 14 00:13:17.420335 kubelet[2530]: I0314 00:13:17.419933 2530 kubelet_node_status.go:123] "Node was previously registered" 
node="172-233-218-137" Mar 14 00:13:17.420335 kubelet[2530]: I0314 00:13:17.419986 2530 kubelet_node_status.go:77] "Successfully registered node" node="172-233-218-137" Mar 14 00:13:17.640500 kubelet[2530]: E0314 00:13:17.639963 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:17.640944 kubelet[2530]: E0314 00:13:17.640913 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:17.641734 kubelet[2530]: E0314 00:13:17.641028 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:18.187917 kubelet[2530]: I0314 00:13:18.187838 2530 apiserver.go:52] "Watching apiserver" Mar 14 00:13:18.212881 kubelet[2530]: I0314 00:13:18.212807 2530 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:13:18.256481 kubelet[2530]: E0314 00:13:18.255635 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:18.256481 kubelet[2530]: I0314 00:13:18.255803 2530 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-218-137" Mar 14 00:13:18.257752 kubelet[2530]: E0314 00:13:18.257037 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:18.265883 kubelet[2530]: E0314 00:13:18.265864 2530 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-172-233-218-137\" already exists" pod="kube-system/kube-apiserver-172-233-218-137" Mar 14 00:13:18.267295 kubelet[2530]: E0314 00:13:18.267280 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:18.287752 kubelet[2530]: I0314 00:13:18.287710 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-218-137" podStartSLOduration=2.287701206 podStartE2EDuration="2.287701206s" podCreationTimestamp="2026-03-14 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:18.287594626 +0000 UTC m=+1.163516957" watchObservedRunningTime="2026-03-14 00:13:18.287701206 +0000 UTC m=+1.163623537" Mar 14 00:13:18.288706 kubelet[2530]: I0314 00:13:18.287935 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-218-137" podStartSLOduration=2.287930886 podStartE2EDuration="2.287930886s" podCreationTimestamp="2026-03-14 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:18.280611173 +0000 UTC m=+1.156533504" watchObservedRunningTime="2026-03-14 00:13:18.287930886 +0000 UTC m=+1.163853217" Mar 14 00:13:18.294083 kubelet[2530]: I0314 00:13:18.294047 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-218-137" podStartSLOduration=2.29403986 podStartE2EDuration="2.29403986s" podCreationTimestamp="2026-03-14 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:18.29402251 +0000 UTC m=+1.169944841" 
watchObservedRunningTime="2026-03-14 00:13:18.29403986 +0000 UTC m=+1.169962201" Mar 14 00:13:19.258326 kubelet[2530]: E0314 00:13:19.258265 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:19.260383 kubelet[2530]: E0314 00:13:19.260332 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:20.260100 kubelet[2530]: E0314 00:13:20.259843 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:20.260100 kubelet[2530]: E0314 00:13:20.259850 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:22.163936 kubelet[2530]: I0314 00:13:22.163881 2530 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:13:22.164596 containerd[1466]: time="2026-03-14T00:13:22.164235839Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:13:22.164896 kubelet[2530]: I0314 00:13:22.164727 2530 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:13:22.800405 systemd[1]: Created slice kubepods-besteffort-pod5ed8bf07_636a_43f4_8703_e3f5822d4b83.slice - libcontainer container kubepods-besteffort-pod5ed8bf07_636a_43f4_8703_e3f5822d4b83.slice. 
Mar 14 00:13:22.852104 kubelet[2530]: I0314 00:13:22.852035 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ed8bf07-636a-43f4-8703-e3f5822d4b83-kube-proxy\") pod \"kube-proxy-vj4s2\" (UID: \"5ed8bf07-636a-43f4-8703-e3f5822d4b83\") " pod="kube-system/kube-proxy-vj4s2" Mar 14 00:13:22.852104 kubelet[2530]: I0314 00:13:22.852090 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed8bf07-636a-43f4-8703-e3f5822d4b83-lib-modules\") pod \"kube-proxy-vj4s2\" (UID: \"5ed8bf07-636a-43f4-8703-e3f5822d4b83\") " pod="kube-system/kube-proxy-vj4s2" Mar 14 00:13:22.852104 kubelet[2530]: I0314 00:13:22.852110 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5rr\" (UniqueName: \"kubernetes.io/projected/5ed8bf07-636a-43f4-8703-e3f5822d4b83-kube-api-access-bd5rr\") pod \"kube-proxy-vj4s2\" (UID: \"5ed8bf07-636a-43f4-8703-e3f5822d4b83\") " pod="kube-system/kube-proxy-vj4s2" Mar 14 00:13:22.852544 kubelet[2530]: I0314 00:13:22.852129 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed8bf07-636a-43f4-8703-e3f5822d4b83-xtables-lock\") pod \"kube-proxy-vj4s2\" (UID: \"5ed8bf07-636a-43f4-8703-e3f5822d4b83\") " pod="kube-system/kube-proxy-vj4s2" Mar 14 00:13:22.957365 kubelet[2530]: E0314 00:13:22.957327 2530 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 14 00:13:22.957365 kubelet[2530]: E0314 00:13:22.957358 2530 projected.go:196] Error preparing data for projected volume kube-api-access-bd5rr for pod kube-system/kube-proxy-vj4s2: configmap "kube-root-ca.crt" not found Mar 14 00:13:22.957530 kubelet[2530]: E0314 00:13:22.957421 2530 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ed8bf07-636a-43f4-8703-e3f5822d4b83-kube-api-access-bd5rr podName:5ed8bf07-636a-43f4-8703-e3f5822d4b83 nodeName:}" failed. No retries permitted until 2026-03-14 00:13:23.457402266 +0000 UTC m=+6.333324597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bd5rr" (UniqueName: "kubernetes.io/projected/5ed8bf07-636a-43f4-8703-e3f5822d4b83-kube-api-access-bd5rr") pod "kube-proxy-vj4s2" (UID: "5ed8bf07-636a-43f4-8703-e3f5822d4b83") : configmap "kube-root-ca.crt" not found Mar 14 00:13:23.477144 systemd[1]: Created slice kubepods-besteffort-pod2f5dc677_f054_45bb_a7e5_11de1ecdd9a9.slice - libcontainer container kubepods-besteffort-pod2f5dc677_f054_45bb_a7e5_11de1ecdd9a9.slice. Mar 14 00:13:23.556601 kubelet[2530]: I0314 00:13:23.556508 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f5dc677-f054-45bb-a7e5-11de1ecdd9a9-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-kg8r8\" (UID: \"2f5dc677-f054-45bb-a7e5-11de1ecdd9a9\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kg8r8" Mar 14 00:13:23.557388 kubelet[2530]: I0314 00:13:23.556623 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwck4\" (UniqueName: \"kubernetes.io/projected/2f5dc677-f054-45bb-a7e5-11de1ecdd9a9-kube-api-access-rwck4\") pod \"tigera-operator-6cf4cccc57-kg8r8\" (UID: \"2f5dc677-f054-45bb-a7e5-11de1ecdd9a9\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kg8r8" Mar 14 00:13:23.710977 kubelet[2530]: E0314 00:13:23.710944 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:23.711554 containerd[1466]: time="2026-03-14T00:13:23.711503632Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vj4s2,Uid:5ed8bf07-636a-43f4-8703-e3f5822d4b83,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:23.734256 containerd[1466]: time="2026-03-14T00:13:23.734097420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:23.734615 containerd[1466]: time="2026-03-14T00:13:23.734150869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:23.734615 containerd[1466]: time="2026-03-14T00:13:23.734442789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:23.734615 containerd[1466]: time="2026-03-14T00:13:23.734517639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:23.756729 systemd[1]: Started cri-containerd-dad9975856b52f41a0d33150ea3d1a034c33ccb65811aa73b5f4fcd1ed0c72c1.scope - libcontainer container dad9975856b52f41a0d33150ea3d1a034c33ccb65811aa73b5f4fcd1ed0c72c1. 
Mar 14 00:13:23.787049 containerd[1466]: time="2026-03-14T00:13:23.786807167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vj4s2,Uid:5ed8bf07-636a-43f4-8703-e3f5822d4b83,Namespace:kube-system,Attempt:0,} returns sandbox id \"dad9975856b52f41a0d33150ea3d1a034c33ccb65811aa73b5f4fcd1ed0c72c1\"" Mar 14 00:13:23.787488 containerd[1466]: time="2026-03-14T00:13:23.787203136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kg8r8,Uid:2f5dc677-f054-45bb-a7e5-11de1ecdd9a9,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:13:23.787899 kubelet[2530]: E0314 00:13:23.787881 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:23.794182 containerd[1466]: time="2026-03-14T00:13:23.793362820Z" level=info msg="CreateContainer within sandbox \"dad9975856b52f41a0d33150ea3d1a034c33ccb65811aa73b5f4fcd1ed0c72c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:13:23.804881 containerd[1466]: time="2026-03-14T00:13:23.804820919Z" level=info msg="CreateContainer within sandbox \"dad9975856b52f41a0d33150ea3d1a034c33ccb65811aa73b5f4fcd1ed0c72c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72a0d0d3ae1ea28e55854e5c15fcf64799b2db8c80ac8bd4829689c4fcfbebd1\"" Mar 14 00:13:23.807474 containerd[1466]: time="2026-03-14T00:13:23.807447966Z" level=info msg="StartContainer for \"72a0d0d3ae1ea28e55854e5c15fcf64799b2db8c80ac8bd4829689c4fcfbebd1\"" Mar 14 00:13:23.816876 containerd[1466]: time="2026-03-14T00:13:23.816789767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:23.816951 containerd[1466]: time="2026-03-14T00:13:23.816920217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:23.816976 containerd[1466]: time="2026-03-14T00:13:23.816954847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:23.817140 containerd[1466]: time="2026-03-14T00:13:23.817079537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:23.841734 systemd[1]: Started cri-containerd-abbfaa063e79f8c1879eb3ba950fb6828362ea436593c3d7b6269cd174725af4.scope - libcontainer container abbfaa063e79f8c1879eb3ba950fb6828362ea436593c3d7b6269cd174725af4. Mar 14 00:13:23.846505 systemd[1]: Started cri-containerd-72a0d0d3ae1ea28e55854e5c15fcf64799b2db8c80ac8bd4829689c4fcfbebd1.scope - libcontainer container 72a0d0d3ae1ea28e55854e5c15fcf64799b2db8c80ac8bd4829689c4fcfbebd1. Mar 14 00:13:23.891060 containerd[1466]: time="2026-03-14T00:13:23.890845433Z" level=info msg="StartContainer for \"72a0d0d3ae1ea28e55854e5c15fcf64799b2db8c80ac8bd4829689c4fcfbebd1\" returns successfully" Mar 14 00:13:23.898588 containerd[1466]: time="2026-03-14T00:13:23.898544125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kg8r8,Uid:2f5dc677-f054-45bb-a7e5-11de1ecdd9a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"abbfaa063e79f8c1879eb3ba950fb6828362ea436593c3d7b6269cd174725af4\"" Mar 14 00:13:23.902509 containerd[1466]: time="2026-03-14T00:13:23.902476391Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:13:24.023448 kubelet[2530]: E0314 00:13:24.023359 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:24.277427 kubelet[2530]: E0314 00:13:24.277288 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:24.328603 kubelet[2530]: E0314 00:13:24.326529 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:24.335326 kubelet[2530]: I0314 00:13:24.335098 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vj4s2" podStartSLOduration=2.335086519 podStartE2EDuration="2.335086519s" podCreationTimestamp="2026-03-14 00:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:24.289750494 +0000 UTC m=+7.165672825" watchObservedRunningTime="2026-03-14 00:13:24.335086519 +0000 UTC m=+7.211008850" Mar 14 00:13:24.606268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065600121.mount: Deactivated successfully. Mar 14 00:13:25.323525 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 14 00:13:26.102269 containerd[1466]: time="2026-03-14T00:13:26.102235081Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:26.103084 containerd[1466]: time="2026-03-14T00:13:26.103054351Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:13:26.104752 containerd[1466]: time="2026-03-14T00:13:26.103613200Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:26.106087 containerd[1466]: time="2026-03-14T00:13:26.105310588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:26.106087 containerd[1466]: time="2026-03-14T00:13:26.106000278Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.203498237s" Mar 14 00:13:26.106087 containerd[1466]: time="2026-03-14T00:13:26.106024398Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:13:26.109646 containerd[1466]: time="2026-03-14T00:13:26.109625284Z" level=info msg="CreateContainer within sandbox \"abbfaa063e79f8c1879eb3ba950fb6828362ea436593c3d7b6269cd174725af4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:13:26.131288 containerd[1466]: time="2026-03-14T00:13:26.131262972Z" level=info msg="CreateContainer within sandbox 
\"abbfaa063e79f8c1879eb3ba950fb6828362ea436593c3d7b6269cd174725af4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b\"" Mar 14 00:13:26.132663 containerd[1466]: time="2026-03-14T00:13:26.131705682Z" level=info msg="StartContainer for \"043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b\"" Mar 14 00:13:26.160910 systemd[1]: run-containerd-runc-k8s.io-043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b-runc.OCA41N.mount: Deactivated successfully. Mar 14 00:13:26.175679 systemd[1]: Started cri-containerd-043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b.scope - libcontainer container 043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b. Mar 14 00:13:26.200029 containerd[1466]: time="2026-03-14T00:13:26.199993804Z" level=info msg="StartContainer for \"043f2b3dedea4ca4f9f6127c84dfb8b8ca3ca7679879221d668737fb66a4196b\" returns successfully" Mar 14 00:13:26.291774 kubelet[2530]: I0314 00:13:26.290709 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-kg8r8" podStartSLOduration=1.086260327 podStartE2EDuration="3.290696833s" podCreationTimestamp="2026-03-14 00:13:23 +0000 UTC" firstStartedPulling="2026-03-14 00:13:23.902209791 +0000 UTC m=+6.778132122" lastFinishedPulling="2026-03-14 00:13:26.106646297 +0000 UTC m=+8.982568628" observedRunningTime="2026-03-14 00:13:26.290505603 +0000 UTC m=+9.166427934" watchObservedRunningTime="2026-03-14 00:13:26.290696833 +0000 UTC m=+9.166619174" Mar 14 00:13:28.961769 kubelet[2530]: E0314 00:13:28.961549 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:31.747842 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:31.773203 sshd[1676]: 
pam_unix(sshd:session): session closed for user core Mar 14 00:13:31.780868 systemd[1]: sshd@6-172.233.218.137:22-4.153.228.146:41884.service: Deactivated successfully. Mar 14 00:13:31.784253 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:13:31.785188 systemd[1]: session-7.scope: Consumed 3.075s CPU time, 155.7M memory peak, 0B memory swap peak. Mar 14 00:13:31.786011 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:13:31.788200 systemd-logind[1444]: Removed session 7. Mar 14 00:13:34.031416 kubelet[2530]: E0314 00:13:34.031332 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:34.105958 systemd[1]: Created slice kubepods-besteffort-pod16a3651a_c595_44b1_bfb2_1a549db585a3.slice - libcontainer container kubepods-besteffort-pod16a3651a_c595_44b1_bfb2_1a549db585a3.slice. Mar 14 00:13:34.129592 kubelet[2530]: I0314 00:13:34.129463 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16a3651a-c595-44b1-bfb2-1a549db585a3-tigera-ca-bundle\") pod \"calico-typha-56569967bc-924qj\" (UID: \"16a3651a-c595-44b1-bfb2-1a549db585a3\") " pod="calico-system/calico-typha-56569967bc-924qj" Mar 14 00:13:34.129857 kubelet[2530]: I0314 00:13:34.129776 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bvk\" (UniqueName: \"kubernetes.io/projected/16a3651a-c595-44b1-bfb2-1a549db585a3-kube-api-access-r8bvk\") pod \"calico-typha-56569967bc-924qj\" (UID: \"16a3651a-c595-44b1-bfb2-1a549db585a3\") " pod="calico-system/calico-typha-56569967bc-924qj" Mar 14 00:13:34.129857 kubelet[2530]: I0314 00:13:34.129806 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"typha-certs\" (UniqueName: \"kubernetes.io/secret/16a3651a-c595-44b1-bfb2-1a549db585a3-typha-certs\") pod \"calico-typha-56569967bc-924qj\" (UID: \"16a3651a-c595-44b1-bfb2-1a549db585a3\") " pod="calico-system/calico-typha-56569967bc-924qj" Mar 14 00:13:34.169526 systemd[1]: Created slice kubepods-besteffort-podacf70b84_3cf5_43db_a36b_238e4da83dcb.slice - libcontainer container kubepods-besteffort-podacf70b84_3cf5_43db_a36b_238e4da83dcb.slice. Mar 14 00:13:34.230361 kubelet[2530]: I0314 00:13:34.230330 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-bpffs\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230361 kubelet[2530]: I0314 00:13:34.230363 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-cni-net-dir\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230517 kubelet[2530]: I0314 00:13:34.230378 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-lib-modules\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230517 kubelet[2530]: I0314 00:13:34.230392 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-nodeproc\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230517 kubelet[2530]: I0314 
00:13:34.230405 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-var-lib-calico\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230517 kubelet[2530]: I0314 00:13:34.230418 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-var-run-calico\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230517 kubelet[2530]: I0314 00:13:34.230439 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-flexvol-driver-host\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230669 kubelet[2530]: I0314 00:13:34.230453 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-policysync\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230669 kubelet[2530]: I0314 00:13:34.230472 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-cni-bin-dir\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230669 kubelet[2530]: I0314 00:13:34.230505 2530 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-cni-log-dir\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230669 kubelet[2530]: I0314 00:13:34.230521 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-sys-fs\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230669 kubelet[2530]: I0314 00:13:34.230534 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf70b84-3cf5-43db-a36b-238e4da83dcb-tigera-ca-bundle\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230779 kubelet[2530]: I0314 00:13:34.230548 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acf70b84-3cf5-43db-a36b-238e4da83dcb-xtables-lock\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230779 kubelet[2530]: I0314 00:13:34.230579 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65sv8\" (UniqueName: \"kubernetes.io/projected/acf70b84-3cf5-43db-a36b-238e4da83dcb-kube-api-access-65sv8\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.230779 kubelet[2530]: I0314 00:13:34.230595 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/acf70b84-3cf5-43db-a36b-238e4da83dcb-node-certs\") pod \"calico-node-6rbzj\" (UID: \"acf70b84-3cf5-43db-a36b-238e4da83dcb\") " pod="calico-system/calico-node-6rbzj" Mar 14 00:13:34.273808 kubelet[2530]: E0314 00:13:34.273765 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:34.331675 kubelet[2530]: E0314 00:13:34.330721 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:34.331675 kubelet[2530]: I0314 00:13:34.330816 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/efc9842b-5041-4fb5-bc21-b23964d856d2-registration-dir\") pod \"csi-node-driver-mlvrd\" (UID: \"efc9842b-5041-4fb5-bc21-b23964d856d2\") " pod="calico-system/csi-node-driver-mlvrd" Mar 14 00:13:34.331675 kubelet[2530]: I0314 00:13:34.330844 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efc9842b-5041-4fb5-bc21-b23964d856d2-kubelet-dir\") pod \"csi-node-driver-mlvrd\" (UID: \"efc9842b-5041-4fb5-bc21-b23964d856d2\") " pod="calico-system/csi-node-driver-mlvrd" Mar 14 00:13:34.331675 kubelet[2530]: I0314 00:13:34.330868 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/efc9842b-5041-4fb5-bc21-b23964d856d2-socket-dir\") pod \"csi-node-driver-mlvrd\" (UID: \"efc9842b-5041-4fb5-bc21-b23964d856d2\") " 
pod="calico-system/csi-node-driver-mlvrd" Mar 14 00:13:34.331675 kubelet[2530]: I0314 00:13:34.330927 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/efc9842b-5041-4fb5-bc21-b23964d856d2-varrun\") pod \"csi-node-driver-mlvrd\" (UID: \"efc9842b-5041-4fb5-bc21-b23964d856d2\") " pod="calico-system/csi-node-driver-mlvrd" Mar 14 00:13:34.331675 kubelet[2530]: I0314 00:13:34.330951 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mztm\" (UniqueName: \"kubernetes.io/projected/efc9842b-5041-4fb5-bc21-b23964d856d2-kube-api-access-8mztm\") pod \"csi-node-driver-mlvrd\" (UID: \"efc9842b-5041-4fb5-bc21-b23964d856d2\") " pod="calico-system/csi-node-driver-mlvrd" Mar 14 00:13:34.339457 kubelet[2530]: E0314 00:13:34.339238 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.339457 kubelet[2530]: W0314 00:13:34.339271 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.339457 kubelet[2530]: E0314 00:13:34.339317 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.340383 kubelet[2530]: E0314 00:13:34.340354 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.340418 kubelet[2530]: W0314 00:13:34.340369 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.340418 kubelet[2530]: E0314 00:13:34.340400 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.340779 kubelet[2530]: E0314 00:13:34.340759 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.340779 kubelet[2530]: W0314 00:13:34.340773 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.340839 kubelet[2530]: E0314 00:13:34.340785 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.343581 kubelet[2530]: E0314 00:13:34.342005 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.343581 kubelet[2530]: W0314 00:13:34.342017 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.343581 kubelet[2530]: E0314 00:13:34.342026 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.346316 kubelet[2530]: E0314 00:13:34.346302 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.347048 kubelet[2530]: W0314 00:13:34.347033 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.347116 kubelet[2530]: E0314 00:13:34.347104 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.347405 kubelet[2530]: E0314 00:13:34.347394 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.347480 kubelet[2530]: W0314 00:13:34.347446 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.347480 kubelet[2530]: E0314 00:13:34.347459 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.347852 kubelet[2530]: E0314 00:13:34.347841 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.347911 kubelet[2530]: W0314 00:13:34.347900 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.347958 kubelet[2530]: E0314 00:13:34.347948 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.348242 kubelet[2530]: E0314 00:13:34.348231 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.348379 kubelet[2530]: W0314 00:13:34.348288 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.348379 kubelet[2530]: E0314 00:13:34.348300 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.348531 kubelet[2530]: E0314 00:13:34.348519 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.348704 kubelet[2530]: W0314 00:13:34.348612 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.348704 kubelet[2530]: E0314 00:13:34.348627 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.348941 kubelet[2530]: E0314 00:13:34.348911 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.348941 kubelet[2530]: W0314 00:13:34.348921 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.348941 kubelet[2530]: E0314 00:13:34.348930 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.349374 kubelet[2530]: E0314 00:13:34.349262 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.349374 kubelet[2530]: W0314 00:13:34.349272 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.349374 kubelet[2530]: E0314 00:13:34.349280 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.350332 kubelet[2530]: E0314 00:13:34.350261 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.350332 kubelet[2530]: W0314 00:13:34.350272 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.350332 kubelet[2530]: E0314 00:13:34.350281 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.350928 kubelet[2530]: E0314 00:13:34.350866 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.350928 kubelet[2530]: W0314 00:13:34.350877 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.350928 kubelet[2530]: E0314 00:13:34.350886 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.364614 kubelet[2530]: E0314 00:13:34.363148 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.364614 kubelet[2530]: W0314 00:13:34.363166 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.364614 kubelet[2530]: E0314 00:13:34.363181 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.411453 kubelet[2530]: E0314 00:13:34.411136 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:34.412124 containerd[1466]: time="2026-03-14T00:13:34.411722221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56569967bc-924qj,Uid:16a3651a-c595-44b1-bfb2-1a549db585a3,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:34.414114 kubelet[2530]: E0314 00:13:34.414076 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.414114 kubelet[2530]: W0314 00:13:34.414092 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.414114 kubelet[2530]: E0314 00:13:34.414108 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.414642 kubelet[2530]: E0314 00:13:34.414340 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.414642 kubelet[2530]: W0314 00:13:34.414351 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.414642 kubelet[2530]: E0314 00:13:34.414361 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.414642 kubelet[2530]: E0314 00:13:34.414545 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.414642 kubelet[2530]: W0314 00:13:34.414553 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.414642 kubelet[2530]: E0314 00:13:34.414578 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.414826 kubelet[2530]: E0314 00:13:34.414802 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.414826 kubelet[2530]: W0314 00:13:34.414811 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.414826 kubelet[2530]: E0314 00:13:34.414820 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.415040 kubelet[2530]: E0314 00:13:34.415005 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.415040 kubelet[2530]: W0314 00:13:34.415015 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.415040 kubelet[2530]: E0314 00:13:34.415023 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.415225 kubelet[2530]: E0314 00:13:34.415201 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.415225 kubelet[2530]: W0314 00:13:34.415213 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.415225 kubelet[2530]: E0314 00:13:34.415221 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.415626 kubelet[2530]: E0314 00:13:34.415406 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.415626 kubelet[2530]: W0314 00:13:34.415414 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.415626 kubelet[2530]: E0314 00:13:34.415421 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.415813 kubelet[2530]: E0314 00:13:34.415681 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.415813 kubelet[2530]: W0314 00:13:34.415689 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.415813 kubelet[2530]: E0314 00:13:34.415698 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.415962 kubelet[2530]: E0314 00:13:34.415950 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.415962 kubelet[2530]: W0314 00:13:34.415960 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.416038 kubelet[2530]: E0314 00:13:34.415968 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.416274 kubelet[2530]: E0314 00:13:34.416258 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.416274 kubelet[2530]: W0314 00:13:34.416271 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.416347 kubelet[2530]: E0314 00:13:34.416283 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.416994 kubelet[2530]: E0314 00:13:34.416656 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.417048 kubelet[2530]: W0314 00:13:34.416994 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.417048 kubelet[2530]: E0314 00:13:34.417006 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.417324 kubelet[2530]: E0314 00:13:34.417310 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.417324 kubelet[2530]: W0314 00:13:34.417322 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.417406 kubelet[2530]: E0314 00:13:34.417330 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.417762 kubelet[2530]: E0314 00:13:34.417687 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.417762 kubelet[2530]: W0314 00:13:34.417700 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.417762 kubelet[2530]: E0314 00:13:34.417708 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.418968 kubelet[2530]: E0314 00:13:34.418139 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.418968 kubelet[2530]: W0314 00:13:34.418150 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.418968 kubelet[2530]: E0314 00:13:34.418159 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.418968 kubelet[2530]: E0314 00:13:34.418443 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.418968 kubelet[2530]: W0314 00:13:34.418451 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.418968 kubelet[2530]: E0314 00:13:34.418459 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.420805 kubelet[2530]: E0314 00:13:34.420785 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.420842 kubelet[2530]: W0314 00:13:34.420830 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.420900 kubelet[2530]: E0314 00:13:34.420841 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.421179 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.422506 kubelet[2530]: W0314 00:13:34.421193 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.421202 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.421509 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.422506 kubelet[2530]: W0314 00:13:34.421517 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.421548 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.422342 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.422506 kubelet[2530]: W0314 00:13:34.422350 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.422506 kubelet[2530]: E0314 00:13:34.422360 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.423477 kubelet[2530]: E0314 00:13:34.423033 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.423477 kubelet[2530]: W0314 00:13:34.423044 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.423477 kubelet[2530]: E0314 00:13:34.423053 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.423477 kubelet[2530]: E0314 00:13:34.423356 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.423477 kubelet[2530]: W0314 00:13:34.423365 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.423477 kubelet[2530]: E0314 00:13:34.423374 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.424294 kubelet[2530]: E0314 00:13:34.423996 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.424294 kubelet[2530]: W0314 00:13:34.424196 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.424294 kubelet[2530]: E0314 00:13:34.424206 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.424946 kubelet[2530]: E0314 00:13:34.424887 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.424946 kubelet[2530]: W0314 00:13:34.424920 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.424946 kubelet[2530]: E0314 00:13:34.424929 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.425745 kubelet[2530]: E0314 00:13:34.425647 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.425745 kubelet[2530]: W0314 00:13:34.425669 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.425745 kubelet[2530]: E0314 00:13:34.425683 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:34.426612 kubelet[2530]: E0314 00:13:34.426000 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:34.426612 kubelet[2530]: W0314 00:13:34.426010 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:34.426612 kubelet[2530]: E0314 00:13:34.426019 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:34.441583 containerd[1466]: time="2026-03-14T00:13:34.440906465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:34.441583 containerd[1466]: time="2026-03-14T00:13:34.441451443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:34.441583 containerd[1466]: time="2026-03-14T00:13:34.441465255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:34.441745 containerd[1466]: time="2026-03-14T00:13:34.441680662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:34.465884 systemd[1]: Started cri-containerd-315628c08271654cc3fbf4592af58df40b0919439f3fbe9778f330256dceec27.scope - libcontainer container 315628c08271654cc3fbf4592af58df40b0919439f3fbe9778f330256dceec27. Mar 14 00:13:34.476628 containerd[1466]: time="2026-03-14T00:13:34.476346133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rbzj,Uid:acf70b84-3cf5-43db-a36b-238e4da83dcb,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:34.501152 containerd[1466]: time="2026-03-14T00:13:34.500966641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:34.501152 containerd[1466]: time="2026-03-14T00:13:34.501013485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:34.501152 containerd[1466]: time="2026-03-14T00:13:34.501027296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:34.501152 containerd[1466]: time="2026-03-14T00:13:34.501099542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:34.522104 containerd[1466]: time="2026-03-14T00:13:34.522004788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56569967bc-924qj,Uid:16a3651a-c595-44b1-bfb2-1a549db585a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"315628c08271654cc3fbf4592af58df40b0919439f3fbe9778f330256dceec27\"" Mar 14 00:13:34.523301 kubelet[2530]: E0314 00:13:34.523277 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:34.524277 systemd[1]: Started cri-containerd-b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc.scope - libcontainer container b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc. 
Mar 14 00:13:34.526718 containerd[1466]: time="2026-03-14T00:13:34.526692834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:13:34.549629 containerd[1466]: time="2026-03-14T00:13:34.549551460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rbzj,Uid:acf70b84-3cf5-43db-a36b-238e4da83dcb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\"" Mar 14 00:13:35.652847 containerd[1466]: time="2026-03-14T00:13:35.652808569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:35.656376 containerd[1466]: time="2026-03-14T00:13:35.656343317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:13:35.656928 containerd[1466]: time="2026-03-14T00:13:35.656889231Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:35.658697 containerd[1466]: time="2026-03-14T00:13:35.658677156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:35.659982 containerd[1466]: time="2026-03-14T00:13:35.659344831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.132623385s" Mar 14 00:13:35.659982 containerd[1466]: time="2026-03-14T00:13:35.659369863Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:13:35.661814 containerd[1466]: time="2026-03-14T00:13:35.661794480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:13:35.679035 containerd[1466]: time="2026-03-14T00:13:35.679015471Z" level=info msg="CreateContainer within sandbox \"315628c08271654cc3fbf4592af58df40b0919439f3fbe9778f330256dceec27\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:13:35.689740 containerd[1466]: time="2026-03-14T00:13:35.689707251Z" level=info msg="CreateContainer within sandbox \"315628c08271654cc3fbf4592af58df40b0919439f3fbe9778f330256dceec27\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9e8dd657071115a091794e8f35d2304a9439a046036debb344f393adb6be6fb9\"" Mar 14 00:13:35.691567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661567266.mount: Deactivated successfully. Mar 14 00:13:35.693061 containerd[1466]: time="2026-03-14T00:13:35.692539092Z" level=info msg="StartContainer for \"9e8dd657071115a091794e8f35d2304a9439a046036debb344f393adb6be6fb9\"" Mar 14 00:13:35.721687 systemd[1]: Started cri-containerd-9e8dd657071115a091794e8f35d2304a9439a046036debb344f393adb6be6fb9.scope - libcontainer container 9e8dd657071115a091794e8f35d2304a9439a046036debb344f393adb6be6fb9. 
Mar 14 00:13:35.765237 containerd[1466]: time="2026-03-14T00:13:35.765204852Z" level=info msg="StartContainer for \"9e8dd657071115a091794e8f35d2304a9439a046036debb344f393adb6be6fb9\" returns successfully" Mar 14 00:13:36.224532 kubelet[2530]: E0314 00:13:36.224475 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:36.312134 kubelet[2530]: E0314 00:13:36.312091 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:36.326044 kubelet[2530]: I0314 00:13:36.325968 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-56569967bc-924qj" podStartSLOduration=1.190368209 podStartE2EDuration="2.325954302s" podCreationTimestamp="2026-03-14 00:13:34 +0000 UTC" firstStartedPulling="2026-03-14 00:13:34.525278922 +0000 UTC m=+17.401201263" lastFinishedPulling="2026-03-14 00:13:35.660865025 +0000 UTC m=+18.536787356" observedRunningTime="2026-03-14 00:13:36.325826412 +0000 UTC m=+19.201748763" watchObservedRunningTime="2026-03-14 00:13:36.325954302 +0000 UTC m=+19.201876633" Mar 14 00:13:36.330213 containerd[1466]: time="2026-03-14T00:13:36.330169143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.330906 containerd[1466]: time="2026-03-14T00:13:36.330845294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 14 00:13:36.331635 containerd[1466]: time="2026-03-14T00:13:36.331333862Z" level=info 
msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.333786 containerd[1466]: time="2026-03-14T00:13:36.333756246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.334407 containerd[1466]: time="2026-03-14T00:13:36.334369923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 672.483635ms" Mar 14 00:13:36.334443 containerd[1466]: time="2026-03-14T00:13:36.334407866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:13:36.336983 kubelet[2530]: E0314 00:13:36.336950 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.337057 kubelet[2530]: W0314 00:13:36.337040 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.337189 kubelet[2530]: E0314 00:13:36.337120 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:36.337552 kubelet[2530]: E0314 00:13:36.337531 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.337552 kubelet[2530]: W0314 00:13:36.337550 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.337634 kubelet[2530]: E0314 00:13:36.337598 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:36.337990 kubelet[2530]: E0314 00:13:36.337960 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.337990 kubelet[2530]: W0314 00:13:36.337976 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.337990 kubelet[2530]: E0314 00:13:36.337985 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:36.338267 kubelet[2530]: E0314 00:13:36.338237 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.338267 kubelet[2530]: W0314 00:13:36.338253 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.338267 kubelet[2530]: E0314 00:13:36.338270 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:36.339275 kubelet[2530]: E0314 00:13:36.339193 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.339275 kubelet[2530]: W0314 00:13:36.339211 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.339275 kubelet[2530]: E0314 00:13:36.339223 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:36.340470 kubelet[2530]: E0314 00:13:36.340126 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.340470 kubelet[2530]: W0314 00:13:36.340143 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.340470 kubelet[2530]: E0314 00:13:36.340154 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:36.340873 kubelet[2530]: E0314 00:13:36.340842 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.340873 kubelet[2530]: W0314 00:13:36.340860 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.340873 kubelet[2530]: E0314 00:13:36.340870 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:36.341315 kubelet[2530]: E0314 00:13:36.341279 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.341380 kubelet[2530]: W0314 00:13:36.341356 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.341503 kubelet[2530]: E0314 00:13:36.341436 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:36.342127 containerd[1466]: time="2026-03-14T00:13:36.341993114Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:13:36.342183 kubelet[2530]: E0314 00:13:36.342031 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:36.342183 kubelet[2530]: W0314 00:13:36.342041 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:36.342183 kubelet[2530]: E0314 00:13:36.342051 2530 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:36.363831 containerd[1466]: time="2026-03-14T00:13:36.363738421Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54\"" Mar 14 00:13:36.365647 containerd[1466]: time="2026-03-14T00:13:36.364550533Z" level=info msg="StartContainer for \"352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54\"" Mar 14 00:13:36.406827 systemd[1]: Started cri-containerd-352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54.scope - libcontainer container 352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54. Mar 14 00:13:36.438418 containerd[1466]: time="2026-03-14T00:13:36.438374418Z" level=info msg="StartContainer for \"352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54\" returns successfully" Mar 14 00:13:36.452488 systemd[1]: cri-containerd-352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54.scope: Deactivated successfully. Mar 14 00:13:36.584774 containerd[1466]: time="2026-03-14T00:13:36.584698808Z" level=info msg="shim disconnected" id=352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54 namespace=k8s.io Mar 14 00:13:36.584774 containerd[1466]: time="2026-03-14T00:13:36.584748441Z" level=warning msg="cleaning up after shim disconnected" id=352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54 namespace=k8s.io Mar 14 00:13:36.584774 containerd[1466]: time="2026-03-14T00:13:36.584776003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:37.236666 systemd[1]: run-containerd-runc-k8s.io-352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54-runc.3UIqf6.mount: Deactivated successfully. 
Mar 14 00:13:37.236775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352e8ded9d6839338b1f5b741af8c9813435020c38ef5332c0d1b41b73922b54-rootfs.mount: Deactivated successfully. Mar 14 00:13:37.312341 kubelet[2530]: I0314 00:13:37.311824 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:13:37.312341 kubelet[2530]: E0314 00:13:37.312081 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:37.314412 containerd[1466]: time="2026-03-14T00:13:37.314367885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:13:38.225583 kubelet[2530]: E0314 00:13:38.224354 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:39.420337 update_engine[1445]: I20260314 00:13:39.420280 1445 update_attempter.cc:509] Updating boot flags... Mar 14 00:13:39.480583 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3262) Mar 14 00:13:40.224091 kubelet[2530]: E0314 00:13:40.223760 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:40.807531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103619961.mount: Deactivated successfully. 
Mar 14 00:13:40.834874 containerd[1466]: time="2026-03-14T00:13:40.834824779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.838586 containerd[1466]: time="2026-03-14T00:13:40.836694438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:13:40.840534 containerd[1466]: time="2026-03-14T00:13:40.840489450Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.844139 containerd[1466]: time="2026-03-14T00:13:40.844016707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.845210 containerd[1466]: time="2026-03-14T00:13:40.845178855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.530777798s" Mar 14 00:13:40.845210 containerd[1466]: time="2026-03-14T00:13:40.845205077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:13:40.849827 containerd[1466]: time="2026-03-14T00:13:40.849792716Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:13:40.863264 containerd[1466]: time="2026-03-14T00:13:40.863235044Z" level=info 
msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7\"" Mar 14 00:13:40.864086 containerd[1466]: time="2026-03-14T00:13:40.864005299Z" level=info msg="StartContainer for \"49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7\"" Mar 14 00:13:40.900709 systemd[1]: Started cri-containerd-49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7.scope - libcontainer container 49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7. Mar 14 00:13:40.932405 containerd[1466]: time="2026-03-14T00:13:40.932166555Z" level=info msg="StartContainer for \"49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7\" returns successfully" Mar 14 00:13:40.984922 systemd[1]: cri-containerd-49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7.scope: Deactivated successfully. Mar 14 00:13:41.111997 containerd[1466]: time="2026-03-14T00:13:41.111867312Z" level=info msg="shim disconnected" id=49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7 namespace=k8s.io Mar 14 00:13:41.111997 containerd[1466]: time="2026-03-14T00:13:41.111910864Z" level=warning msg="cleaning up after shim disconnected" id=49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7 namespace=k8s.io Mar 14 00:13:41.111997 containerd[1466]: time="2026-03-14T00:13:41.111920475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:41.323190 containerd[1466]: time="2026-03-14T00:13:41.323106360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:13:41.809778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c2d88a95d8d542a08592f6c45b8f18c7c3849fe15d0213978441c4f3cc6ac7-rootfs.mount: Deactivated successfully. 
Mar 14 00:13:42.224031 kubelet[2530]: E0314 00:13:42.223905 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:42.927200 containerd[1466]: time="2026-03-14T00:13:42.927138111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.928346 containerd[1466]: time="2026-03-14T00:13:42.928316521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:13:42.929001 containerd[1466]: time="2026-03-14T00:13:42.928963505Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.931397 containerd[1466]: time="2026-03-14T00:13:42.931232731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.932037 containerd[1466]: time="2026-03-14T00:13:42.932011532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.608868411s" Mar 14 00:13:42.932087 containerd[1466]: time="2026-03-14T00:13:42.932038553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:13:42.935357 containerd[1466]: time="2026-03-14T00:13:42.935323962Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:13:42.949115 containerd[1466]: time="2026-03-14T00:13:42.949088899Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6\"" Mar 14 00:13:42.949538 containerd[1466]: time="2026-03-14T00:13:42.949506201Z" level=info msg="StartContainer for \"4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6\"" Mar 14 00:13:42.980409 systemd[1]: run-containerd-runc-k8s.io-4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6-runc.DSGMI0.mount: Deactivated successfully. Mar 14 00:13:42.994694 systemd[1]: Started cri-containerd-4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6.scope - libcontainer container 4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6. Mar 14 00:13:43.024106 containerd[1466]: time="2026-03-14T00:13:43.024071632Z" level=info msg="StartContainer for \"4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6\" returns successfully" Mar 14 00:13:43.513255 containerd[1466]: time="2026-03-14T00:13:43.513188936Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:13:43.517074 systemd[1]: cri-containerd-4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6.scope: Deactivated successfully. 
Mar 14 00:13:43.547500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6-rootfs.mount: Deactivated successfully. Mar 14 00:13:43.570342 containerd[1466]: time="2026-03-14T00:13:43.570283375Z" level=info msg="shim disconnected" id=4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6 namespace=k8s.io Mar 14 00:13:43.570342 containerd[1466]: time="2026-03-14T00:13:43.570333937Z" level=warning msg="cleaning up after shim disconnected" id=4ffcd2725a31d3f1b5d03c278eda1f8531ff6a4eb1598190ba04c5c3388470f6 namespace=k8s.io Mar 14 00:13:43.570342 containerd[1466]: time="2026-03-14T00:13:43.570343008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:43.582038 kubelet[2530]: I0314 00:13:43.582003 2530 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:13:43.622184 systemd[1]: Created slice kubepods-burstable-podb627313e_4d3d_42f3_aad2_4c1df6199113.slice - libcontainer container kubepods-burstable-podb627313e_4d3d_42f3_aad2_4c1df6199113.slice. Mar 14 00:13:43.639063 systemd[1]: Created slice kubepods-besteffort-pod5071b63d_65a6_4318_a3dc_58009573e7ce.slice - libcontainer container kubepods-besteffort-pod5071b63d_65a6_4318_a3dc_58009573e7ce.slice. Mar 14 00:13:43.649287 systemd[1]: Created slice kubepods-burstable-pod6c418ad6_e69b_4b1f_bded_2e2a531bde69.slice - libcontainer container kubepods-burstable-pod6c418ad6_e69b_4b1f_bded_2e2a531bde69.slice. Mar 14 00:13:43.660355 systemd[1]: Created slice kubepods-besteffort-pod9494d07f_08cc_408c_af89_27df5fb41f1e.slice - libcontainer container kubepods-besteffort-pod9494d07f_08cc_408c_af89_27df5fb41f1e.slice. Mar 14 00:13:43.672835 systemd[1]: Created slice kubepods-besteffort-podc9b5e89e_6a6d_45ac_beaa_0696f3422320.slice - libcontainer container kubepods-besteffort-podc9b5e89e_6a6d_45ac_beaa_0696f3422320.slice. 
Mar 14 00:13:43.681931 systemd[1]: Created slice kubepods-besteffort-pod890dbf4b_5710_47dc_9a3c_1eca2584bc93.slice - libcontainer container kubepods-besteffort-pod890dbf4b_5710_47dc_9a3c_1eca2584bc93.slice. Mar 14 00:13:43.690871 systemd[1]: Created slice kubepods-besteffort-podde2e34ff_5bb6_4f87_994e_5128b1af60f5.slice - libcontainer container kubepods-besteffort-podde2e34ff_5bb6_4f87_994e_5128b1af60f5.slice. Mar 14 00:13:43.700527 kubelet[2530]: I0314 00:13:43.700503 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dnhb\" (UniqueName: \"kubernetes.io/projected/890dbf4b-5710-47dc-9a3c-1eca2584bc93-kube-api-access-9dnhb\") pod \"calico-apiserver-6dcdbf46fb-kgmhm\" (UID: \"890dbf4b-5710-47dc-9a3c-1eca2584bc93\") " pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm" Mar 14 00:13:43.700677 kubelet[2530]: I0314 00:13:43.700662 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c418ad6-e69b-4b1f-bded-2e2a531bde69-config-volume\") pod \"coredns-7d764666f9-h2qg5\" (UID: \"6c418ad6-e69b-4b1f-bded-2e2a531bde69\") " pod="kube-system/coredns-7d764666f9-h2qg5" Mar 14 00:13:43.700757 kubelet[2530]: I0314 00:13:43.700732 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcv2c\" (UniqueName: \"kubernetes.io/projected/c9b5e89e-6a6d-45ac-beaa-0696f3422320-kube-api-access-pcv2c\") pod \"goldmane-9f7667bb8-5lvd2\" (UID: \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\") " pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:43.701990 kubelet[2530]: I0314 00:13:43.701131 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/890dbf4b-5710-47dc-9a3c-1eca2584bc93-calico-apiserver-certs\") pod \"calico-apiserver-6dcdbf46fb-kgmhm\" (UID: 
\"890dbf4b-5710-47dc-9a3c-1eca2584bc93\") " pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm" Mar 14 00:13:43.701990 kubelet[2530]: I0314 00:13:43.701172 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-backend-key-pair\") pod \"whisker-7bc96cdf7c-gtz6k\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " pod="calico-system/whisker-7bc96cdf7c-gtz6k" Mar 14 00:13:43.701990 kubelet[2530]: I0314 00:13:43.701200 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wbzr\" (UniqueName: \"kubernetes.io/projected/de2e34ff-5bb6-4f87-994e-5128b1af60f5-kube-api-access-4wbzr\") pod \"whisker-7bc96cdf7c-gtz6k\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " pod="calico-system/whisker-7bc96cdf7c-gtz6k" Mar 14 00:13:43.701990 kubelet[2530]: I0314 00:13:43.701217 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9494d07f-08cc-408c-af89-27df5fb41f1e-calico-apiserver-certs\") pod \"calico-apiserver-6dcdbf46fb-8h4g2\" (UID: \"9494d07f-08cc-408c-af89-27df5fb41f1e\") " pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2" Mar 14 00:13:43.701990 kubelet[2530]: I0314 00:13:43.701230 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5071b63d-65a6-4318-a3dc-58009573e7ce-tigera-ca-bundle\") pod \"calico-kube-controllers-7b54b945f5-vqsjc\" (UID: \"5071b63d-65a6-4318-a3dc-58009573e7ce\") " pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" Mar 14 00:13:43.702126 kubelet[2530]: I0314 00:13:43.701245 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hbv\" (UniqueName: 
\"kubernetes.io/projected/6c418ad6-e69b-4b1f-bded-2e2a531bde69-kube-api-access-m7hbv\") pod \"coredns-7d764666f9-h2qg5\" (UID: \"6c418ad6-e69b-4b1f-bded-2e2a531bde69\") " pod="kube-system/coredns-7d764666f9-h2qg5" Mar 14 00:13:43.702126 kubelet[2530]: I0314 00:13:43.701261 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-nginx-config\") pod \"whisker-7bc96cdf7c-gtz6k\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " pod="calico-system/whisker-7bc96cdf7c-gtz6k" Mar 14 00:13:43.702126 kubelet[2530]: I0314 00:13:43.701295 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-ca-bundle\") pod \"whisker-7bc96cdf7c-gtz6k\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " pod="calico-system/whisker-7bc96cdf7c-gtz6k" Mar 14 00:13:43.702126 kubelet[2530]: I0314 00:13:43.701309 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b627313e-4d3d-42f3-aad2-4c1df6199113-config-volume\") pod \"coredns-7d764666f9-pngzk\" (UID: \"b627313e-4d3d-42f3-aad2-4c1df6199113\") " pod="kube-system/coredns-7d764666f9-pngzk" Mar 14 00:13:43.702126 kubelet[2530]: I0314 00:13:43.701323 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjjz\" (UniqueName: \"kubernetes.io/projected/5071b63d-65a6-4318-a3dc-58009573e7ce-kube-api-access-mvjjz\") pod \"calico-kube-controllers-7b54b945f5-vqsjc\" (UID: \"5071b63d-65a6-4318-a3dc-58009573e7ce\") " pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" Mar 14 00:13:43.702234 kubelet[2530]: I0314 00:13:43.701337 2530 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b5e89e-6a6d-45ac-beaa-0696f3422320-config\") pod \"goldmane-9f7667bb8-5lvd2\" (UID: \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\") " pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:43.702234 kubelet[2530]: I0314 00:13:43.701355 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqwd\" (UniqueName: \"kubernetes.io/projected/9494d07f-08cc-408c-af89-27df5fb41f1e-kube-api-access-hzqwd\") pod \"calico-apiserver-6dcdbf46fb-8h4g2\" (UID: \"9494d07f-08cc-408c-af89-27df5fb41f1e\") " pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2" Mar 14 00:13:43.702234 kubelet[2530]: I0314 00:13:43.701368 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9b5e89e-6a6d-45ac-beaa-0696f3422320-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-5lvd2\" (UID: \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\") " pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:43.702234 kubelet[2530]: I0314 00:13:43.701383 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9b5e89e-6a6d-45ac-beaa-0696f3422320-goldmane-key-pair\") pod \"goldmane-9f7667bb8-5lvd2\" (UID: \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\") " pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:43.702234 kubelet[2530]: I0314 00:13:43.701405 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcsm\" (UniqueName: \"kubernetes.io/projected/b627313e-4d3d-42f3-aad2-4c1df6199113-kube-api-access-sqcsm\") pod \"coredns-7d764666f9-pngzk\" (UID: \"b627313e-4d3d-42f3-aad2-4c1df6199113\") " pod="kube-system/coredns-7d764666f9-pngzk" Mar 14 00:13:43.932781 kubelet[2530]: E0314 
00:13:43.930743 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:43.932914 containerd[1466]: time="2026-03-14T00:13:43.931954826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pngzk,Uid:b627313e-4d3d-42f3-aad2-4c1df6199113,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:43.954105 containerd[1466]: time="2026-03-14T00:13:43.951309737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b54b945f5-vqsjc,Uid:5071b63d-65a6-4318-a3dc-58009573e7ce,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:43.970641 kubelet[2530]: E0314 00:13:43.969372 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:43.971156 containerd[1466]: time="2026-03-14T00:13:43.971134041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-8h4g2,Uid:9494d07f-08cc-408c-af89-27df5fb41f1e,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:43.971802 containerd[1466]: time="2026-03-14T00:13:43.971783403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h2qg5,Uid:6c418ad6-e69b-4b1f-bded-2e2a531bde69,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:43.983169 containerd[1466]: time="2026-03-14T00:13:43.983147749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5lvd2,Uid:c9b5e89e-6a6d-45ac-beaa-0696f3422320,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:43.992520 containerd[1466]: time="2026-03-14T00:13:43.992470348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-kgmhm,Uid:890dbf4b-5710-47dc-9a3c-1eca2584bc93,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:43.995493 containerd[1466]: 
time="2026-03-14T00:13:43.995472703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc96cdf7c-gtz6k,Uid:de2e34ff-5bb6-4f87-994e-5128b1af60f5,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:44.151831 containerd[1466]: time="2026-03-14T00:13:44.151772416Z" level=error msg="Failed to destroy network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.152645 containerd[1466]: time="2026-03-14T00:13:44.152443746Z" level=error msg="encountered an error cleaning up failed sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.152645 containerd[1466]: time="2026-03-14T00:13:44.152524860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5lvd2,Uid:c9b5e89e-6a6d-45ac-beaa-0696f3422320,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.153183 containerd[1466]: time="2026-03-14T00:13:44.152281899Z" level=error msg="Failed to destroy network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.153377 
containerd[1466]: time="2026-03-14T00:13:44.153331136Z" level=error msg="encountered an error cleaning up failed sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.153432 containerd[1466]: time="2026-03-14T00:13:44.153399539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pngzk,Uid:b627313e-4d3d-42f3-aad2-4c1df6199113,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.154384 kubelet[2530]: E0314 00:13:44.154339 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.154430 kubelet[2530]: E0314 00:13:44.154408 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:44.154503 kubelet[2530]: E0314 00:13:44.154431 2530 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5lvd2" Mar 14 00:13:44.154889 kubelet[2530]: E0314 00:13:44.154777 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.154889 kubelet[2530]: E0314 00:13:44.154803 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pngzk" Mar 14 00:13:44.154889 kubelet[2530]: E0314 00:13:44.154818 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pngzk" Mar 14 00:13:44.155584 kubelet[2530]: E0314 00:13:44.155311 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-9f7667bb8-5lvd2_calico-system(c9b5e89e-6a6d-45ac-beaa-0696f3422320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-5lvd2_calico-system(c9b5e89e-6a6d-45ac-beaa-0696f3422320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5lvd2" podUID="c9b5e89e-6a6d-45ac-beaa-0696f3422320" Mar 14 00:13:44.155584 kubelet[2530]: E0314 00:13:44.155328 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-pngzk_kube-system(b627313e-4d3d-42f3-aad2-4c1df6199113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pngzk_kube-system(b627313e-4d3d-42f3-aad2-4c1df6199113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pngzk" podUID="b627313e-4d3d-42f3-aad2-4c1df6199113" Mar 14 00:13:44.204772 containerd[1466]: time="2026-03-14T00:13:44.204478241Z" level=error msg="Failed to destroy network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.207375 containerd[1466]: time="2026-03-14T00:13:44.207156572Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.207375 containerd[1466]: time="2026-03-14T00:13:44.207207224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b54b945f5-vqsjc,Uid:5071b63d-65a6-4318-a3dc-58009573e7ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.208212 kubelet[2530]: E0314 00:13:44.207976 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.208212 kubelet[2530]: E0314 00:13:44.208024 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" Mar 14 00:13:44.208212 kubelet[2530]: E0314 00:13:44.208041 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" Mar 14 00:13:44.208448 kubelet[2530]: E0314 00:13:44.208082 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b54b945f5-vqsjc_calico-system(5071b63d-65a6-4318-a3dc-58009573e7ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b54b945f5-vqsjc_calico-system(5071b63d-65a6-4318-a3dc-58009573e7ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" podUID="5071b63d-65a6-4318-a3dc-58009573e7ce" Mar 14 00:13:44.233160 containerd[1466]: time="2026-03-14T00:13:44.233125693Z" level=error msg="Failed to destroy network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.234039 systemd[1]: Created slice kubepods-besteffort-podefc9842b_5041_4fb5_bc21_b23964d856d2.slice - libcontainer container kubepods-besteffort-podefc9842b_5041_4fb5_bc21_b23964d856d2.slice. 
Mar 14 00:13:44.235183 containerd[1466]: time="2026-03-14T00:13:44.234985977Z" level=error msg="encountered an error cleaning up failed sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.235183 containerd[1466]: time="2026-03-14T00:13:44.235145013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-8h4g2,Uid:9494d07f-08cc-408c-af89-27df5fb41f1e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.237150 kubelet[2530]: E0314 00:13:44.237100 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.237150 kubelet[2530]: E0314 00:13:44.237142 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2"
Mar 14 00:13:44.237248 kubelet[2530]: E0314 00:13:44.237157 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2"
Mar 14 00:13:44.237248 kubelet[2530]: E0314 00:13:44.237193 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcdbf46fb-8h4g2_calico-system(9494d07f-08cc-408c-af89-27df5fb41f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcdbf46fb-8h4g2_calico-system(9494d07f-08cc-408c-af89-27df5fb41f1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2" podUID="9494d07f-08cc-408c-af89-27df5fb41f1e"
Mar 14 00:13:44.237952 containerd[1466]: time="2026-03-14T00:13:44.237360943Z" level=error msg="Failed to destroy network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.238776 containerd[1466]: time="2026-03-14T00:13:44.238375649Z" level=error msg="encountered an error cleaning up failed sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.238776 containerd[1466]: time="2026-03-14T00:13:44.238414021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h2qg5,Uid:6c418ad6-e69b-4b1f-bded-2e2a531bde69,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.238894 kubelet[2530]: E0314 00:13:44.238873 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.238966 kubelet[2530]: E0314 00:13:44.238951 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-h2qg5"
Mar 14 00:13:44.239033 kubelet[2530]: E0314 00:13:44.239014 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-h2qg5"
Mar 14 00:13:44.239112 kubelet[2530]: E0314 00:13:44.239093 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-h2qg5_kube-system(6c418ad6-e69b-4b1f-bded-2e2a531bde69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-h2qg5_kube-system(6c418ad6-e69b-4b1f-bded-2e2a531bde69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-h2qg5" podUID="6c418ad6-e69b-4b1f-bded-2e2a531bde69"
Mar 14 00:13:44.244242 containerd[1466]: time="2026-03-14T00:13:44.244218432Z" level=error msg="Failed to destroy network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.245334 containerd[1466]: time="2026-03-14T00:13:44.245310011Z" level=error msg="encountered an error cleaning up failed sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.245518 containerd[1466]: time="2026-03-14T00:13:44.245473339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-kgmhm,Uid:890dbf4b-5710-47dc-9a3c-1eca2584bc93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.245802 containerd[1466]: time="2026-03-14T00:13:44.245784733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlvrd,Uid:efc9842b-5041-4fb5-bc21-b23964d856d2,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:44.246011 kubelet[2530]: E0314 00:13:44.245924 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.246011 kubelet[2530]: E0314 00:13:44.245970 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm"
Mar 14 00:13:44.246011 kubelet[2530]: E0314 00:13:44.245985 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm"
Mar 14 00:13:44.246369 kubelet[2530]: E0314 00:13:44.246145 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dcdbf46fb-kgmhm_calico-system(890dbf4b-5710-47dc-9a3c-1eca2584bc93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dcdbf46fb-kgmhm_calico-system(890dbf4b-5710-47dc-9a3c-1eca2584bc93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm" podUID="890dbf4b-5710-47dc-9a3c-1eca2584bc93"
Mar 14 00:13:44.258539 containerd[1466]: time="2026-03-14T00:13:44.258487775Z" level=error msg="Failed to destroy network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.260585 containerd[1466]: time="2026-03-14T00:13:44.259234579Z" level=error msg="encountered an error cleaning up failed sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.260585 containerd[1466]: time="2026-03-14T00:13:44.259331653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc96cdf7c-gtz6k,Uid:de2e34ff-5bb6-4f87-994e-5128b1af60f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.260696 kubelet[2530]: E0314 00:13:44.259914 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.260696 kubelet[2530]: E0314 00:13:44.259944 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc96cdf7c-gtz6k"
Mar 14 00:13:44.260696 kubelet[2530]: E0314 00:13:44.259960 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc96cdf7c-gtz6k"
Mar 14 00:13:44.260768 kubelet[2530]: E0314 00:13:44.259991 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bc96cdf7c-gtz6k_calico-system(de2e34ff-5bb6-4f87-994e-5128b1af60f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bc96cdf7c-gtz6k_calico-system(de2e34ff-5bb6-4f87-994e-5128b1af60f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc96cdf7c-gtz6k" podUID="de2e34ff-5bb6-4f87-994e-5128b1af60f5"
Mar 14 00:13:44.299916 containerd[1466]: time="2026-03-14T00:13:44.299873331Z" level=error msg="Failed to destroy network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.300230 containerd[1466]: time="2026-03-14T00:13:44.300183064Z" level=error msg="encountered an error cleaning up failed sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.300722 containerd[1466]: time="2026-03-14T00:13:44.300668737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlvrd,Uid:efc9842b-5041-4fb5-bc21-b23964d856d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.300898 kubelet[2530]: E0314 00:13:44.300853 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:44.300962 kubelet[2530]: E0314 00:13:44.300913 2530 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlvrd"
Mar 14 00:13:44.300962 kubelet[2530]: E0314 00:13:44.300930 2530 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlvrd"
Mar 14 00:13:44.301027 kubelet[2530]: E0314 00:13:44.300974 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mlvrd_calico-system(efc9842b-5041-4fb5-bc21-b23964d856d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mlvrd_calico-system(efc9842b-5041-4fb5-bc21-b23964d856d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2"
Mar 14 00:13:44.337062 kubelet[2530]: I0314 00:13:44.336324 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e"
Mar 14 00:13:44.337460 containerd[1466]: time="2026-03-14T00:13:44.337313988Z" level=info msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\""
Mar 14 00:13:44.337460 containerd[1466]: time="2026-03-14T00:13:44.337454194Z" level=info msg="Ensure that sandbox 12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e in task-service has been cleanup successfully"
Mar 14 00:13:44.342981 kubelet[2530]: I0314 00:13:44.342439 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:13:44.344254 containerd[1466]: time="2026-03-14T00:13:44.344194338Z" level=info msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\""
Mar 14 00:13:44.344354 containerd[1466]: time="2026-03-14T00:13:44.344333384Z" level=info msg="Ensure that sandbox 4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4 in task-service has been cleanup successfully"
Mar 14 00:13:44.350612 kubelet[2530]: I0314 00:13:44.350403 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2"
Mar 14 00:13:44.352862 containerd[1466]: time="2026-03-14T00:13:44.352841538Z" level=info msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\""
Mar 14 00:13:44.355238 containerd[1466]: time="2026-03-14T00:13:44.355218805Z" level=info msg="Ensure that sandbox f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2 in task-service has been cleanup successfully"
Mar 14
00:13:44.359308 containerd[1466]: time="2026-03-14T00:13:44.359281168Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:13:44.361421 kubelet[2530]: I0314 00:13:44.361398 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:44.367973 containerd[1466]: time="2026-03-14T00:13:44.367841004Z" level=info msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\"" Mar 14 00:13:44.367973 containerd[1466]: time="2026-03-14T00:13:44.367953699Z" level=info msg="Ensure that sandbox 7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013 in task-service has been cleanup successfully" Mar 14 00:13:44.376172 kubelet[2530]: I0314 00:13:44.374643 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:44.382906 containerd[1466]: time="2026-03-14T00:13:44.382880182Z" level=info msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" Mar 14 00:13:44.383083 containerd[1466]: time="2026-03-14T00:13:44.383065991Z" level=info msg="Ensure that sandbox 02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4 in task-service has been cleanup successfully" Mar 14 00:13:44.389930 kubelet[2530]: I0314 00:13:44.389908 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:44.392076 containerd[1466]: time="2026-03-14T00:13:44.391826325Z" level=info msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" Mar 14 00:13:44.392404 containerd[1466]: time="2026-03-14T00:13:44.392268084Z" level=info 
msg="Ensure that sandbox 79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c in task-service has been cleanup successfully" Mar 14 00:13:44.406613 kubelet[2530]: I0314 00:13:44.406471 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:13:44.407361 containerd[1466]: time="2026-03-14T00:13:44.406987068Z" level=info msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" Mar 14 00:13:44.407361 containerd[1466]: time="2026-03-14T00:13:44.407137755Z" level=info msg="Ensure that sandbox afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790 in task-service has been cleanup successfully" Mar 14 00:13:44.420485 kubelet[2530]: I0314 00:13:44.420207 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:13:44.423924 containerd[1466]: time="2026-03-14T00:13:44.423882250Z" level=info msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\"" Mar 14 00:13:44.425606 containerd[1466]: time="2026-03-14T00:13:44.425376757Z" level=info msg="Ensure that sandbox c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15 in task-service has been cleanup successfully" Mar 14 00:13:44.433442 containerd[1466]: time="2026-03-14T00:13:44.433395728Z" level=info msg="CreateContainer within sandbox \"b3c0795ffd805f1afc6cdecc694245d818c8a8e5d8a6540be3616b985caf00dc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48ce37f6a2bf643b9b0f5e48b8c63f3b130c6394ac0c87c5e4eac0d495106335\"" Mar 14 00:13:44.436250 containerd[1466]: time="2026-03-14T00:13:44.436124202Z" level=info msg="StartContainer for \"48ce37f6a2bf643b9b0f5e48b8c63f3b130c6394ac0c87c5e4eac0d495106335\"" Mar 14 00:13:44.464075 containerd[1466]: time="2026-03-14T00:13:44.463754166Z" 
level=error msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" failed" error="failed to destroy network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.465172 kubelet[2530]: E0314 00:13:44.464726 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:13:44.465172 kubelet[2530]: E0314 00:13:44.464766 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e"} Mar 14 00:13:44.465172 kubelet[2530]: E0314 00:13:44.464831 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efc9842b-5041-4fb5-bc21-b23964d856d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.465172 kubelet[2530]: E0314 00:13:44.464852 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efc9842b-5041-4fb5-bc21-b23964d856d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mlvrd" podUID="efc9842b-5041-4fb5-bc21-b23964d856d2" Mar 14 00:13:44.491403 containerd[1466]: time="2026-03-14T00:13:44.491328399Z" level=error msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" failed" error="failed to destroy network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.491904 kubelet[2530]: E0314 00:13:44.491861 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Mar 14 00:13:44.491966 kubelet[2530]: E0314 00:13:44.491942 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"} Mar 14 00:13:44.492029 kubelet[2530]: E0314 00:13:44.492011 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"890dbf4b-5710-47dc-9a3c-1eca2584bc93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.492095 kubelet[2530]: E0314 00:13:44.492038 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"890dbf4b-5710-47dc-9a3c-1eca2584bc93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm" podUID="890dbf4b-5710-47dc-9a3c-1eca2584bc93" Mar 14 00:13:44.507215 containerd[1466]: time="2026-03-14T00:13:44.507185413Z" level=error msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" failed" error="failed to destroy network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.507497 kubelet[2530]: E0314 00:13:44.507437 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:13:44.507538 kubelet[2530]: E0314 00:13:44.507505 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2"} Mar 14 00:13:44.507602 kubelet[2530]: E0314 00:13:44.507588 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b627313e-4d3d-42f3-aad2-4c1df6199113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.508273 kubelet[2530]: E0314 00:13:44.507715 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b627313e-4d3d-42f3-aad2-4c1df6199113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pngzk" podUID="b627313e-4d3d-42f3-aad2-4c1df6199113" Mar 14 00:13:44.515202 containerd[1466]: time="2026-03-14T00:13:44.515175623Z" level=error msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" failed" error="failed to destroy network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.515841 kubelet[2530]: E0314 00:13:44.515817 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:44.515956 kubelet[2530]: E0314 00:13:44.515939 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"} Mar 14 00:13:44.516101 kubelet[2530]: E0314 00:13:44.516086 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.516344 kubelet[2530]: E0314 00:13:44.516315 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc96cdf7c-gtz6k" podUID="de2e34ff-5bb6-4f87-994e-5128b1af60f5" Mar 14 00:13:44.527824 systemd[1]: Started cri-containerd-48ce37f6a2bf643b9b0f5e48b8c63f3b130c6394ac0c87c5e4eac0d495106335.scope - libcontainer container 48ce37f6a2bf643b9b0f5e48b8c63f3b130c6394ac0c87c5e4eac0d495106335. 
Mar 14 00:13:44.536710 containerd[1466]: time="2026-03-14T00:13:44.536681103Z" level=error msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" failed" error="failed to destroy network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.536920 kubelet[2530]: E0314 00:13:44.536892 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:44.537031 kubelet[2530]: E0314 00:13:44.537015 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4"} Mar 14 00:13:44.537365 kubelet[2530]: E0314 00:13:44.537310 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c418ad6-e69b-4b1f-bded-2e2a531bde69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.537365 kubelet[2530]: E0314 00:13:44.537336 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c418ad6-e69b-4b1f-bded-2e2a531bde69\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-h2qg5" podUID="6c418ad6-e69b-4b1f-bded-2e2a531bde69" Mar 14 00:13:44.538989 containerd[1466]: time="2026-03-14T00:13:44.538953196Z" level=error msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" failed" error="failed to destroy network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.539183 kubelet[2530]: E0314 00:13:44.539162 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:13:44.539267 kubelet[2530]: E0314 00:13:44.539253 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"} Mar 14 00:13:44.539331 kubelet[2530]: E0314 00:13:44.539319 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5071b63d-65a6-4318-a3dc-58009573e7ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.539413 kubelet[2530]: E0314 00:13:44.539397 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5071b63d-65a6-4318-a3dc-58009573e7ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" podUID="5071b63d-65a6-4318-a3dc-58009573e7ce" Mar 14 00:13:44.540364 containerd[1466]: time="2026-03-14T00:13:44.540321387Z" level=error msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" failed" error="failed to destroy network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.541129 kubelet[2530]: E0314 00:13:44.541033 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:44.541129 kubelet[2530]: E0314 00:13:44.541060 2530 
kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c"} Mar 14 00:13:44.541129 kubelet[2530]: E0314 00:13:44.541078 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.541129 kubelet[2530]: E0314 00:13:44.541097 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9b5e89e-6a6d-45ac-beaa-0696f3422320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5lvd2" podUID="c9b5e89e-6a6d-45ac-beaa-0696f3422320" Mar 14 00:13:44.548730 containerd[1466]: time="2026-03-14T00:13:44.548697085Z" level=error msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" failed" error="failed to destroy network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:44.549021 kubelet[2530]: E0314 00:13:44.548830 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:13:44.549021 kubelet[2530]: E0314 00:13:44.548855 2530 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790"} Mar 14 00:13:44.549021 kubelet[2530]: E0314 00:13:44.548874 2530 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9494d07f-08cc-408c-af89-27df5fb41f1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:44.549021 kubelet[2530]: E0314 00:13:44.548893 2530 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9494d07f-08cc-408c-af89-27df5fb41f1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2" podUID="9494d07f-08cc-408c-af89-27df5fb41f1e" Mar 14 00:13:44.574041 containerd[1466]: time="2026-03-14T00:13:44.574013516Z" level=info msg="StartContainer for 
\"48ce37f6a2bf643b9b0f5e48b8c63f3b130c6394ac0c87c5e4eac0d495106335\" returns successfully" Mar 14 00:13:44.950031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4-shm.mount: Deactivated successfully. Mar 14 00:13:44.950278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15-shm.mount: Deactivated successfully. Mar 14 00:13:44.950369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2-shm.mount: Deactivated successfully. Mar 14 00:13:45.434350 containerd[1466]: time="2026-03-14T00:13:45.433869543Z" level=info msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\"" Mar 14 00:13:45.473818 kubelet[2530]: I0314 00:13:45.473019 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-6rbzj" podStartSLOduration=1.689151546 podStartE2EDuration="11.473001874s" podCreationTimestamp="2026-03-14 00:13:34 +0000 UTC" firstStartedPulling="2026-03-14 00:13:34.551331464 +0000 UTC m=+17.427253795" lastFinishedPulling="2026-03-14 00:13:44.335181782 +0000 UTC m=+27.211104123" observedRunningTime="2026-03-14 00:13:45.452063361 +0000 UTC m=+28.327985692" watchObservedRunningTime="2026-03-14 00:13:45.473001874 +0000 UTC m=+28.348924205" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.504 [INFO][3807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.504 [INFO][3807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" iface="eth0" netns="/var/run/netns/cni-f7381a99-d115-1fbf-157b-0f88cc1b682d" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.504 [INFO][3807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" iface="eth0" netns="/var/run/netns/cni-f7381a99-d115-1fbf-157b-0f88cc1b682d" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.505 [INFO][3807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" iface="eth0" netns="/var/run/netns/cni-f7381a99-d115-1fbf-157b-0f88cc1b682d" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.505 [INFO][3807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.505 [INFO][3807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.525 [INFO][3814] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.525 [INFO][3814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.525 [INFO][3814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.530 [WARNING][3814] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.530 [INFO][3814] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0" Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.531 [INFO][3814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:45.538655 containerd[1466]: 2026-03-14 00:13:45.535 [INFO][3807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Mar 14 00:13:45.541025 containerd[1466]: time="2026-03-14T00:13:45.540979802Z" level=info msg="TearDown network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" successfully" Mar 14 00:13:45.541025 containerd[1466]: time="2026-03-14T00:13:45.541015264Z" level=info msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" returns successfully" Mar 14 00:13:45.542193 systemd[1]: run-netns-cni\x2df7381a99\x2dd115\x2d1fbf\x2d157b\x2d0f88cc1b682d.mount: Deactivated successfully. 
Mar 14 00:13:45.618311 kubelet[2530]: I0314 00:13:45.618222 2530 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-backend-key-pair\") pod \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " Mar 14 00:13:45.618311 kubelet[2530]: I0314 00:13:45.618267 2530 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-ca-bundle\") pod \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " Mar 14 00:13:45.618311 kubelet[2530]: I0314 00:13:45.618311 2530 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-nginx-config\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-nginx-config\") pod \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " Mar 14 00:13:45.618464 kubelet[2530]: I0314 00:13:45.618332 2530 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/de2e34ff-5bb6-4f87-994e-5128b1af60f5-kube-api-access-4wbzr\" (UniqueName: \"kubernetes.io/projected/de2e34ff-5bb6-4f87-994e-5128b1af60f5-kube-api-access-4wbzr\") pod \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\" (UID: \"de2e34ff-5bb6-4f87-994e-5128b1af60f5\") " Mar 14 00:13:45.620107 kubelet[2530]: I0314 00:13:45.619771 2530 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-ca-bundle" pod "de2e34ff-5bb6-4f87-994e-5128b1af60f5" (UID: "de2e34ff-5bb6-4f87-994e-5128b1af60f5"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:13:45.626703 kubelet[2530]: I0314 00:13:45.626676 2530 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2e34ff-5bb6-4f87-994e-5128b1af60f5-kube-api-access-4wbzr" pod "de2e34ff-5bb6-4f87-994e-5128b1af60f5" (UID: "de2e34ff-5bb6-4f87-994e-5128b1af60f5"). InnerVolumeSpecName "kube-api-access-4wbzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:13:45.626978 kubelet[2530]: I0314 00:13:45.626939 2530 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-nginx-config" pod "de2e34ff-5bb6-4f87-994e-5128b1af60f5" (UID: "de2e34ff-5bb6-4f87-994e-5128b1af60f5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:13:45.628401 kubelet[2530]: I0314 00:13:45.628383 2530 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-backend-key-pair" pod "de2e34ff-5bb6-4f87-994e-5128b1af60f5" (UID: "de2e34ff-5bb6-4f87-994e-5128b1af60f5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:13:45.628692 systemd[1]: var-lib-kubelet-pods-de2e34ff\x2d5bb6\x2d4f87\x2d994e\x2d5128b1af60f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4wbzr.mount: Deactivated successfully. Mar 14 00:13:45.632997 systemd[1]: var-lib-kubelet-pods-de2e34ff\x2d5bb6\x2d4f87\x2d994e\x2d5128b1af60f5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 14 00:13:45.719142 kubelet[2530]: I0314 00:13:45.718994 2530 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-nginx-config\") on node \"172-233-218-137\" DevicePath \"\"" Mar 14 00:13:45.719142 kubelet[2530]: I0314 00:13:45.719027 2530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wbzr\" (UniqueName: \"kubernetes.io/projected/de2e34ff-5bb6-4f87-994e-5128b1af60f5-kube-api-access-4wbzr\") on node \"172-233-218-137\" DevicePath \"\"" Mar 14 00:13:45.719142 kubelet[2530]: I0314 00:13:45.719039 2530 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-backend-key-pair\") on node \"172-233-218-137\" DevicePath \"\"" Mar 14 00:13:45.719142 kubelet[2530]: I0314 00:13:45.719047 2530 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de2e34ff-5bb6-4f87-994e-5128b1af60f5-whisker-ca-bundle\") on node \"172-233-218-137\" DevicePath \"\"" Mar 14 00:13:46.434941 kubelet[2530]: I0314 00:13:46.434904 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:13:46.441552 systemd[1]: Removed slice kubepods-besteffort-podde2e34ff_5bb6_4f87_994e_5128b1af60f5.slice - libcontainer container kubepods-besteffort-podde2e34ff_5bb6_4f87_994e_5128b1af60f5.slice. Mar 14 00:13:46.510539 systemd[1]: Created slice kubepods-besteffort-pod86f88207_95b7_4b41_a953_74ed946822fb.slice - libcontainer container kubepods-besteffort-pod86f88207_95b7_4b41_a953_74ed946822fb.slice. 
Mar 14 00:13:46.524461 kubelet[2530]: I0314 00:13:46.524422 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/86f88207-95b7-4b41-a953-74ed946822fb-whisker-backend-key-pair\") pod \"whisker-666988c45d-5xtqr\" (UID: \"86f88207-95b7-4b41-a953-74ed946822fb\") " pod="calico-system/whisker-666988c45d-5xtqr" Mar 14 00:13:46.525936 kubelet[2530]: I0314 00:13:46.524499 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/86f88207-95b7-4b41-a953-74ed946822fb-nginx-config\") pod \"whisker-666988c45d-5xtqr\" (UID: \"86f88207-95b7-4b41-a953-74ed946822fb\") " pod="calico-system/whisker-666988c45d-5xtqr" Mar 14 00:13:46.525936 kubelet[2530]: I0314 00:13:46.524529 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkqt\" (UniqueName: \"kubernetes.io/projected/86f88207-95b7-4b41-a953-74ed946822fb-kube-api-access-tqkqt\") pod \"whisker-666988c45d-5xtqr\" (UID: \"86f88207-95b7-4b41-a953-74ed946822fb\") " pod="calico-system/whisker-666988c45d-5xtqr" Mar 14 00:13:46.525936 kubelet[2530]: I0314 00:13:46.524842 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f88207-95b7-4b41-a953-74ed946822fb-whisker-ca-bundle\") pod \"whisker-666988c45d-5xtqr\" (UID: \"86f88207-95b7-4b41-a953-74ed946822fb\") " pod="calico-system/whisker-666988c45d-5xtqr" Mar 14 00:13:46.819893 containerd[1466]: time="2026-03-14T00:13:46.819637338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666988c45d-5xtqr,Uid:86f88207-95b7-4b41-a953-74ed946822fb,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:46.949505 systemd-networkd[1362]: cali42fb4ba1bee: Link UP Mar 14 00:13:46.951001 systemd-networkd[1362]: 
cali42fb4ba1bee: Gained carrier Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.862 [ERROR][3920] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.874 [INFO][3920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0 whisker-666988c45d- calico-system 86f88207-95b7-4b41-a953-74ed946822fb 897 0 2026-03-14 00:13:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:666988c45d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-233-218-137 whisker-666988c45d-5xtqr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali42fb4ba1bee [] [] }} ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.874 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.904 [INFO][3932] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" HandleID="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Workload="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.911 
[INFO][3932] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" HandleID="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Workload="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"whisker-666988c45d-5xtqr", "timestamp":"2026-03-14 00:13:46.904710767 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.911 [INFO][3932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.911 [INFO][3932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.911 [INFO][3932] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.914 [INFO][3932] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.919 [INFO][3932] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.923 [INFO][3932] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.924 [INFO][3932] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.926 [INFO][3932] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.926 [INFO][3932] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.927 [INFO][3932] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.931 [INFO][3932] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.935 [INFO][3932] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 
handle="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.935 [INFO][3932] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] handle="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" host="172-233-218-137" Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.935 [INFO][3932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:46.975837 containerd[1466]: 2026-03-14 00:13:46.935 [INFO][3932] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" HandleID="k8s-pod-network.8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Workload="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.939 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0", GenerateName:"whisker-666988c45d-", Namespace:"calico-system", SelfLink:"", UID:"86f88207-95b7-4b41-a953-74ed946822fb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"666988c45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"whisker-666988c45d-5xtqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42fb4ba1bee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.939 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.65/32] ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.939 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42fb4ba1bee ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.953 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.953 [INFO][3920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" 
Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0", GenerateName:"whisker-666988c45d-", Namespace:"calico-system", SelfLink:"", UID:"86f88207-95b7-4b41-a953-74ed946822fb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"666988c45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb", Pod:"whisker-666988c45d-5xtqr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali42fb4ba1bee", MAC:"ae:1e:d9:4a:e4:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:46.976546 containerd[1466]: 2026-03-14 00:13:46.968 [INFO][3920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb" Namespace="calico-system" Pod="whisker-666988c45d-5xtqr" WorkloadEndpoint="172--233--218--137-k8s-whisker--666988c45d--5xtqr-eth0" Mar 14 00:13:46.995343 containerd[1466]: 
time="2026-03-14T00:13:46.995243242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:46.995343 containerd[1466]: time="2026-03-14T00:13:46.995291624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:46.995343 containerd[1466]: time="2026-03-14T00:13:46.995305314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:46.995546 containerd[1466]: time="2026-03-14T00:13:46.995379417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:47.026686 systemd[1]: Started cri-containerd-8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb.scope - libcontainer container 8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb. 
Mar 14 00:13:47.089434 containerd[1466]: time="2026-03-14T00:13:47.089298954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-666988c45d-5xtqr,Uid:86f88207-95b7-4b41-a953-74ed946822fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb\"" Mar 14 00:13:47.093058 containerd[1466]: time="2026-03-14T00:13:47.093036113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:13:47.231072 kubelet[2530]: I0314 00:13:47.231026 2530 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="de2e34ff-5bb6-4f87-994e-5128b1af60f5" path="/var/lib/kubelet/pods/de2e34ff-5bb6-4f87-994e-5128b1af60f5/volumes" Mar 14 00:13:47.499718 kubelet[2530]: I0314 00:13:47.499537 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:13:47.501162 kubelet[2530]: E0314 00:13:47.500263 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:47.980682 containerd[1466]: time="2026-03-14T00:13:47.979860999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:47.981058 containerd[1466]: time="2026-03-14T00:13:47.980755102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:13:47.981728 containerd[1466]: time="2026-03-14T00:13:47.981530670Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:47.983373 containerd[1466]: time="2026-03-14T00:13:47.983334757Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:47.984403 containerd[1466]: time="2026-03-14T00:13:47.984375655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 891.310731ms" Mar 14 00:13:47.984453 containerd[1466]: time="2026-03-14T00:13:47.984406217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:13:47.988698 containerd[1466]: time="2026-03-14T00:13:47.988675785Z" level=info msg="CreateContainer within sandbox \"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:13:48.009555 containerd[1466]: time="2026-03-14T00:13:48.009531324Z" level=info msg="CreateContainer within sandbox \"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fdfa7438bdcb26ec13b4d45922dcd90d6467d7ea54b9f8d42b70191f44ff81ef\"" Mar 14 00:13:48.010044 containerd[1466]: time="2026-03-14T00:13:48.009920678Z" level=info msg="StartContainer for \"fdfa7438bdcb26ec13b4d45922dcd90d6467d7ea54b9f8d42b70191f44ff81ef\"" Mar 14 00:13:48.044694 systemd[1]: Started cri-containerd-fdfa7438bdcb26ec13b4d45922dcd90d6467d7ea54b9f8d42b70191f44ff81ef.scope - libcontainer container fdfa7438bdcb26ec13b4d45922dcd90d6467d7ea54b9f8d42b70191f44ff81ef. 
Mar 14 00:13:48.094946 containerd[1466]: time="2026-03-14T00:13:48.094906397Z" level=info msg="StartContainer for \"fdfa7438bdcb26ec13b4d45922dcd90d6467d7ea54b9f8d42b70191f44ff81ef\" returns successfully" Mar 14 00:13:48.097124 containerd[1466]: time="2026-03-14T00:13:48.097094793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:13:48.223292 systemd-networkd[1362]: cali42fb4ba1bee: Gained IPv6LL Mar 14 00:13:48.415676 kernel: calico-node[4068]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:13:48.455605 kubelet[2530]: E0314 00:13:48.454930 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:49.033836 systemd-networkd[1362]: vxlan.calico: Link UP Mar 14 00:13:49.033845 systemd-networkd[1362]: vxlan.calico: Gained carrier Mar 14 00:13:49.547104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306779227.mount: Deactivated successfully. 
Mar 14 00:13:49.555876 containerd[1466]: time="2026-03-14T00:13:49.555823660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.557090 containerd[1466]: time="2026-03-14T00:13:49.557049540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:13:49.557702 containerd[1466]: time="2026-03-14T00:13:49.557678260Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.559863 containerd[1466]: time="2026-03-14T00:13:49.559831190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.560777 containerd[1466]: time="2026-03-14T00:13:49.560702488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.463577984s" Mar 14 00:13:49.560777 containerd[1466]: time="2026-03-14T00:13:49.560728969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:13:49.566082 containerd[1466]: time="2026-03-14T00:13:49.566051011Z" level=info msg="CreateContainer within sandbox \"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:13:49.575676 
containerd[1466]: time="2026-03-14T00:13:49.575637531Z" level=info msg="CreateContainer within sandbox \"8f96fc31701c6647880584d3964dceb1e1c3fb14f4d5188d317450205cee45fb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d\"" Mar 14 00:13:49.576876 containerd[1466]: time="2026-03-14T00:13:49.576123477Z" level=info msg="StartContainer for \"037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d\"" Mar 14 00:13:49.627419 systemd[1]: Started cri-containerd-037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d.scope - libcontainer container 037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d. Mar 14 00:13:49.692023 containerd[1466]: time="2026-03-14T00:13:49.691790110Z" level=info msg="StartContainer for \"037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d\" returns successfully" Mar 14 00:13:50.236418 systemd[1]: run-containerd-runc-k8s.io-037878c3c62fefca4e31cf73db6a0aeea8ca1d29732ed95524b0e57bbab2f93d-runc.tSCNQu.mount: Deactivated successfully. 
Mar 14 00:13:50.270796 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Mar 14 00:13:50.464325 kubelet[2530]: I0314 00:13:50.464267 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-666988c45d-5xtqr" podStartSLOduration=1.995012993 podStartE2EDuration="4.464253852s" podCreationTimestamp="2026-03-14 00:13:46 +0000 UTC" firstStartedPulling="2026-03-14 00:13:47.092473322 +0000 UTC m=+29.968395653" lastFinishedPulling="2026-03-14 00:13:49.561714181 +0000 UTC m=+32.437636512" observedRunningTime="2026-03-14 00:13:50.462946872 +0000 UTC m=+33.338869213" watchObservedRunningTime="2026-03-14 00:13:50.464253852 +0000 UTC m=+33.340176183" Mar 14 00:13:55.226024 containerd[1466]: time="2026-03-14T00:13:55.225980126Z" level=info msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\"" Mar 14 00:13:55.226458 containerd[1466]: time="2026-03-14T00:13:55.226289753Z" level=info msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\"" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.275 [INFO][4285] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.275 [INFO][4285] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" iface="eth0" netns="/var/run/netns/cni-a11c34e7-5a00-678f-f350-249453c0c50f" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.275 [INFO][4285] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" iface="eth0" netns="/var/run/netns/cni-a11c34e7-5a00-678f-f350-249453c0c50f" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.276 [INFO][4285] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" iface="eth0" netns="/var/run/netns/cni-a11c34e7-5a00-678f-f350-249453c0c50f" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.276 [INFO][4285] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.276 [INFO][4285] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.310 [INFO][4298] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.310 [INFO][4298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.311 [INFO][4298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.316 [WARNING][4298] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.316 [INFO][4298] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.318 [INFO][4298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:55.326708 containerd[1466]: 2026-03-14 00:13:55.320 [INFO][4285] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:13:55.327269 containerd[1466]: time="2026-03-14T00:13:55.327140896Z" level=info msg="TearDown network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" successfully" Mar 14 00:13:55.327269 containerd[1466]: time="2026-03-14T00:13:55.327168717Z" level=info msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" returns successfully" Mar 14 00:13:55.329115 systemd[1]: run-netns-cni\x2da11c34e7\x2d5a00\x2d678f\x2df350\x2d249453c0c50f.mount: Deactivated successfully. 
Mar 14 00:13:55.331988 containerd[1466]: time="2026-03-14T00:13:55.331955041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlvrd,Uid:efc9842b-5041-4fb5-bc21-b23964d856d2,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.279 [INFO][4284] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.280 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" iface="eth0" netns="/var/run/netns/cni-3e8d6a98-2555-a3d7-45f5-15359c708166" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.280 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" iface="eth0" netns="/var/run/netns/cni-3e8d6a98-2555-a3d7-45f5-15359c708166" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.281 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" iface="eth0" netns="/var/run/netns/cni-3e8d6a98-2555-a3d7-45f5-15359c708166" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.281 [INFO][4284] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.281 [INFO][4284] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.315 [INFO][4303] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.315 [INFO][4303] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.318 [INFO][4303] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.326 [WARNING][4303] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.326 [INFO][4303] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.328 [INFO][4303] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:55.334500 containerd[1466]: 2026-03-14 00:13:55.331 [INFO][4284] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:13:55.335807 containerd[1466]: time="2026-03-14T00:13:55.335673231Z" level=info msg="TearDown network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" successfully" Mar 14 00:13:55.336101 containerd[1466]: time="2026-03-14T00:13:55.335807034Z" level=info msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" returns successfully" Mar 14 00:13:55.336941 kubelet[2530]: E0314 00:13:55.336922 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:55.340288 containerd[1466]: time="2026-03-14T00:13:55.339824731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pngzk,Uid:b627313e-4d3d-42f3-aad2-4c1df6199113,Namespace:kube-system,Attempt:1,}" Mar 14 00:13:55.341315 systemd[1]: run-netns-cni\x2d3e8d6a98\x2d2555\x2da3d7\x2d45f5\x2d15359c708166.mount: Deactivated 
successfully. Mar 14 00:13:55.473783 systemd-networkd[1362]: cali43a6f8fa5ca: Link UP Mar 14 00:13:55.474738 systemd-networkd[1362]: cali43a6f8fa5ca: Gained carrier Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.390 [INFO][4311] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-csi--node--driver--mlvrd-eth0 csi-node-driver- calico-system efc9842b-5041-4fb5-bc21-b23964d856d2 944 0 2026-03-14 00:13:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-233-218-137 csi-node-driver-mlvrd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali43a6f8fa5ca [] [] }} ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.390 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.427 [INFO][4335] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" HandleID="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.432 [INFO][4335] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" HandleID="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"csi-node-driver-mlvrd", "timestamp":"2026-03-14 00:13:55.427220193 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000cb340)} Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.432 [INFO][4335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.432 [INFO][4335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.432 [INFO][4335] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.434 [INFO][4335] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.440 [INFO][4335] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.443 [INFO][4335] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.445 [INFO][4335] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.446 [INFO][4335] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.446 [INFO][4335] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.447 [INFO][4335] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.451 [INFO][4335] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.455 [INFO][4335] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 
handle="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.455 [INFO][4335] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" host="172-233-218-137" Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.455 [INFO][4335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:55.490371 containerd[1466]: 2026-03-14 00:13:55.455 [INFO][4335] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" HandleID="k8s-pod-network.48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.465 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-csi--node--driver--mlvrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efc9842b-5041-4fb5-bc21-b23964d856d2", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"csi-node-driver-mlvrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43a6f8fa5ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.465 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.66/32] ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.465 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43a6f8fa5ca ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.475 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.475 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-csi--node--driver--mlvrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efc9842b-5041-4fb5-bc21-b23964d856d2", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc", Pod:"csi-node-driver-mlvrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43a6f8fa5ca", MAC:"02:85:83:b8:c8:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:55.490853 containerd[1466]: 2026-03-14 00:13:55.486 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc" 
Namespace="calico-system" Pod="csi-node-driver-mlvrd" WorkloadEndpoint="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:13:55.517627 containerd[1466]: time="2026-03-14T00:13:55.517510527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:55.518271 containerd[1466]: time="2026-03-14T00:13:55.518096961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:55.518271 containerd[1466]: time="2026-03-14T00:13:55.518112631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:55.518271 containerd[1466]: time="2026-03-14T00:13:55.518191543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:55.542492 systemd[1]: Started cri-containerd-48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc.scope - libcontainer container 48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc. 
Mar 14 00:13:55.579740 systemd-networkd[1362]: calic5a5a0acaaf: Link UP Mar 14 00:13:55.580179 systemd-networkd[1362]: calic5a5a0acaaf: Gained carrier Mar 14 00:13:55.587826 containerd[1466]: time="2026-03-14T00:13:55.587786079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlvrd,Uid:efc9842b-5041-4fb5-bc21-b23964d856d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc\"" Mar 14 00:13:55.592588 containerd[1466]: time="2026-03-14T00:13:55.591494789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.404 [INFO][4320] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0 coredns-7d764666f9- kube-system b627313e-4d3d-42f3-aad2-4c1df6199113 945 0 2026-03-14 00:13:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-218-137 coredns-7d764666f9-pngzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic5a5a0acaaf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.404 [INFO][4320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 
00:13:55.435 [INFO][4340] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" HandleID="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.440 [INFO][4340] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" HandleID="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-218-137", "pod":"coredns-7d764666f9-pngzk", "timestamp":"2026-03-14 00:13:55.435067673 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002691e0)} Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.440 [INFO][4340] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.456 [INFO][4340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.456 [INFO][4340] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.535 [INFO][4340] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.544 [INFO][4340] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.551 [INFO][4340] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.553 [INFO][4340] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.554 [INFO][4340] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.554 [INFO][4340] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.556 [INFO][4340] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3 Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.560 [INFO][4340] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.566 [INFO][4340] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 
handle="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.566 [INFO][4340] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" host="172-233-218-137" Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.567 [INFO][4340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:55.602742 containerd[1466]: 2026-03-14 00:13:55.567 [INFO][4340] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" HandleID="k8s-pod-network.0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.571 [INFO][4320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b627313e-4d3d-42f3-aad2-4c1df6199113", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"coredns-7d764666f9-pngzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5a5a0acaaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.571 [INFO][4320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.67/32] ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.571 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5a5a0acaaf ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" 
WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.579 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.581 [INFO][4320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b627313e-4d3d-42f3-aad2-4c1df6199113", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3", Pod:"coredns-7d764666f9-pngzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5a5a0acaaf", MAC:"8e:e2:37:ee:35:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:55.603185 containerd[1466]: 2026-03-14 00:13:55.596 [INFO][4320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3" Namespace="kube-system" Pod="coredns-7d764666f9-pngzk" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:13:55.629872 containerd[1466]: time="2026-03-14T00:13:55.629805779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:55.629946 containerd[1466]: time="2026-03-14T00:13:55.629881231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:55.629946 containerd[1466]: time="2026-03-14T00:13:55.629910191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:55.630067 containerd[1466]: time="2026-03-14T00:13:55.629995303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:55.666705 systemd[1]: Started cri-containerd-0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3.scope - libcontainer container 0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3. Mar 14 00:13:55.710174 containerd[1466]: time="2026-03-14T00:13:55.710120578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pngzk,Uid:b627313e-4d3d-42f3-aad2-4c1df6199113,Namespace:kube-system,Attempt:1,} returns sandbox id \"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3\"" Mar 14 00:13:55.711544 kubelet[2530]: E0314 00:13:55.711379 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:55.716729 containerd[1466]: time="2026-03-14T00:13:55.715946664Z" level=info msg="CreateContainer within sandbox \"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:13:55.725531 containerd[1466]: time="2026-03-14T00:13:55.725441209Z" level=info msg="CreateContainer within sandbox \"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"625fa3b28e877bd2474cb466539e75b9bbf3b7cd2cf417a8bd4bfc80b1455ab6\"" Mar 14 00:13:55.726238 containerd[1466]: time="2026-03-14T00:13:55.726033403Z" level=info msg="StartContainer for \"625fa3b28e877bd2474cb466539e75b9bbf3b7cd2cf417a8bd4bfc80b1455ab6\"" Mar 14 00:13:55.756698 systemd[1]: Started cri-containerd-625fa3b28e877bd2474cb466539e75b9bbf3b7cd2cf417a8bd4bfc80b1455ab6.scope - libcontainer container 
625fa3b28e877bd2474cb466539e75b9bbf3b7cd2cf417a8bd4bfc80b1455ab6. Mar 14 00:13:55.783945 containerd[1466]: time="2026-03-14T00:13:55.783911666Z" level=info msg="StartContainer for \"625fa3b28e877bd2474cb466539e75b9bbf3b7cd2cf417a8bd4bfc80b1455ab6\" returns successfully" Mar 14 00:13:56.225617 containerd[1466]: time="2026-03-14T00:13:56.225456207Z" level=info msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" Mar 14 00:13:56.251404 containerd[1466]: time="2026-03-14T00:13:56.251331271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:56.252848 containerd[1466]: time="2026-03-14T00:13:56.252808971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:13:56.254837 containerd[1466]: time="2026-03-14T00:13:56.253799681Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:56.255890 containerd[1466]: time="2026-03-14T00:13:56.255869283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:56.259021 containerd[1466]: time="2026-03-14T00:13:56.258998966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 667.477946ms" Mar 14 00:13:56.259102 containerd[1466]: time="2026-03-14T00:13:56.259085598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image 
reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:13:56.264058 containerd[1466]: time="2026-03-14T00:13:56.264017088Z" level=info msg="CreateContainer within sandbox \"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:13:56.282783 containerd[1466]: time="2026-03-14T00:13:56.282747556Z" level=info msg="CreateContainer within sandbox \"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8419a4617b983b6474f7bd620c4007044d3ff5a41c3f2ed7e3b7f8622ebc24a9\"" Mar 14 00:13:56.283765 containerd[1466]: time="2026-03-14T00:13:56.283726986Z" level=info msg="StartContainer for \"8419a4617b983b6474f7bd620c4007044d3ff5a41c3f2ed7e3b7f8622ebc24a9\"" Mar 14 00:13:56.343696 systemd[1]: Started cri-containerd-8419a4617b983b6474f7bd620c4007044d3ff5a41c3f2ed7e3b7f8622ebc24a9.scope - libcontainer container 8419a4617b983b6474f7bd620c4007044d3ff5a41c3f2ed7e3b7f8622ebc24a9. Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.300 [INFO][4526] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.301 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" iface="eth0" netns="/var/run/netns/cni-3d4ba847-7391-f1bb-fb1e-915722e691d8" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.302 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" iface="eth0" netns="/var/run/netns/cni-3d4ba847-7391-f1bb-fb1e-915722e691d8" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.303 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" iface="eth0" netns="/var/run/netns/cni-3d4ba847-7391-f1bb-fb1e-915722e691d8" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.303 [INFO][4526] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.303 [INFO][4526] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.351 [INFO][4541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.352 [INFO][4541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.352 [INFO][4541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.358 [WARNING][4541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.358 [INFO][4541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.360 [INFO][4541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.365416 containerd[1466]: 2026-03-14 00:13:56.362 [INFO][4526] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:13:56.366546 containerd[1466]: time="2026-03-14T00:13:56.366086163Z" level=info msg="TearDown network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" successfully" Mar 14 00:13:56.366546 containerd[1466]: time="2026-03-14T00:13:56.366113634Z" level=info msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" returns successfully" Mar 14 00:13:56.368527 systemd[1]: run-netns-cni\x2d3d4ba847\x2d7391\x2df1bb\x2dfb1e\x2d915722e691d8.mount: Deactivated successfully. 
Mar 14 00:13:56.371133 kubelet[2530]: E0314 00:13:56.370288 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:56.372200 containerd[1466]: time="2026-03-14T00:13:56.371702526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h2qg5,Uid:6c418ad6-e69b-4b1f-bded-2e2a531bde69,Namespace:kube-system,Attempt:1,}" Mar 14 00:13:56.407035 containerd[1466]: time="2026-03-14T00:13:56.406999201Z" level=info msg="StartContainer for \"8419a4617b983b6474f7bd620c4007044d3ff5a41c3f2ed7e3b7f8622ebc24a9\" returns successfully" Mar 14 00:13:56.408761 containerd[1466]: time="2026-03-14T00:13:56.408621923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:13:56.470048 kubelet[2530]: E0314 00:13:56.470010 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:56.481511 kubelet[2530]: I0314 00:13:56.480985 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pngzk" podStartSLOduration=33.480974748 podStartE2EDuration="33.480974748s" podCreationTimestamp="2026-03-14 00:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:56.480844625 +0000 UTC m=+39.356766976" watchObservedRunningTime="2026-03-14 00:13:56.480974748 +0000 UTC m=+39.356897089" Mar 14 00:13:56.543787 systemd-networkd[1362]: caliaa4ac6d2340: Link UP Mar 14 00:13:56.544010 systemd-networkd[1362]: caliaa4ac6d2340: Gained carrier Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.440 [INFO][4564] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0 coredns-7d764666f9- kube-system 6c418ad6-e69b-4b1f-bded-2e2a531bde69 965 0 2026-03-14 00:13:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-218-137 coredns-7d764666f9-h2qg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa4ac6d2340 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.441 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.487 [INFO][4585] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" HandleID="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.504 [INFO][4585] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" HandleID="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-218-137", "pod":"coredns-7d764666f9-h2qg5", "timestamp":"2026-03-14 00:13:56.487975369 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000230000)} Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.504 [INFO][4585] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.504 [INFO][4585] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.504 [INFO][4585] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.509 [INFO][4585] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.516 [INFO][4585] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.521 [INFO][4585] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.523 [INFO][4585] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.525 [INFO][4585] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.525 [INFO][4585] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 
handle="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.527 [INFO][4585] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890 Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.531 [INFO][4585] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.537 [INFO][4585] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.537 [INFO][4585] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" host="172-233-218-137" Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.537 [INFO][4585] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:56.562609 containerd[1466]: 2026-03-14 00:13:56.537 [INFO][4585] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" HandleID="k8s-pod-network.d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.540 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c418ad6-e69b-4b1f-bded-2e2a531bde69", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"coredns-7d764666f9-h2qg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4ac6d2340", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.540 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.68/32] ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.540 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa4ac6d2340 ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.543 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.544 [INFO][4564] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c418ad6-e69b-4b1f-bded-2e2a531bde69", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890", Pod:"coredns-7d764666f9-h2qg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4ac6d2340", MAC:"ae:75:28:5e:47:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.563098 containerd[1466]: 2026-03-14 00:13:56.553 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890" Namespace="kube-system" Pod="coredns-7d764666f9-h2qg5" WorkloadEndpoint="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:13:56.595098 containerd[1466]: time="2026-03-14T00:13:56.594781391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:56.595098 containerd[1466]: time="2026-03-14T00:13:56.594857792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:56.595098 containerd[1466]: time="2026-03-14T00:13:56.594882323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:56.595098 containerd[1466]: time="2026-03-14T00:13:56.594989655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:56.637720 systemd[1]: Started cri-containerd-d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890.scope - libcontainer container d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890. 
Mar 14 00:13:56.688968 containerd[1466]: time="2026-03-14T00:13:56.688784973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h2qg5,Uid:6c418ad6-e69b-4b1f-bded-2e2a531bde69,Namespace:kube-system,Attempt:1,} returns sandbox id \"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890\"" Mar 14 00:13:56.690298 kubelet[2530]: E0314 00:13:56.690220 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:56.697685 containerd[1466]: time="2026-03-14T00:13:56.697638512Z" level=info msg="CreateContainer within sandbox \"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:13:56.712250 containerd[1466]: time="2026-03-14T00:13:56.712198307Z" level=info msg="CreateContainer within sandbox \"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75405d28b336810be080fb73cc6d6307cdbdf36cb0f33012efcfdabcdfe18e24\"" Mar 14 00:13:56.714875 containerd[1466]: time="2026-03-14T00:13:56.712874970Z" level=info msg="StartContainer for \"75405d28b336810be080fb73cc6d6307cdbdf36cb0f33012efcfdabcdfe18e24\"" Mar 14 00:13:56.745748 systemd[1]: Started cri-containerd-75405d28b336810be080fb73cc6d6307cdbdf36cb0f33012efcfdabcdfe18e24.scope - libcontainer container 75405d28b336810be080fb73cc6d6307cdbdf36cb0f33012efcfdabcdfe18e24. 
Mar 14 00:13:56.785607 containerd[1466]: time="2026-03-14T00:13:56.785493949Z" level=info msg="StartContainer for \"75405d28b336810be080fb73cc6d6307cdbdf36cb0f33012efcfdabcdfe18e24\" returns successfully" Mar 14 00:13:57.228196 containerd[1466]: time="2026-03-14T00:13:57.227544013Z" level=info msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\"" Mar 14 00:13:57.228737 containerd[1466]: time="2026-03-14T00:13:57.228650374Z" level=info msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" Mar 14 00:13:57.234988 containerd[1466]: time="2026-03-14T00:13:57.234953714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.236744 containerd[1466]: time="2026-03-14T00:13:57.236705027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:13:57.239712 containerd[1466]: time="2026-03-14T00:13:57.239664762Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.250371 containerd[1466]: time="2026-03-14T00:13:57.250312874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.251315 containerd[1466]: time="2026-03-14T00:13:57.251247842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 842.590938ms" Mar 14 00:13:57.251315 containerd[1466]: time="2026-03-14T00:13:57.251289533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:13:57.261532 containerd[1466]: time="2026-03-14T00:13:57.261447724Z" level=info msg="CreateContainer within sandbox \"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:13:57.285730 containerd[1466]: time="2026-03-14T00:13:57.283758996Z" level=info msg="CreateContainer within sandbox \"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6979b55fa5a5b9598b7f00d549dbd8ee80a43dcc5951facf03cda4d37387da22\"" Mar 14 00:13:57.286798 containerd[1466]: time="2026-03-14T00:13:57.286037370Z" level=info msg="StartContainer for \"6979b55fa5a5b9598b7f00d549dbd8ee80a43dcc5951facf03cda4d37387da22\"" Mar 14 00:13:57.359549 systemd[1]: Started cri-containerd-6979b55fa5a5b9598b7f00d549dbd8ee80a43dcc5951facf03cda4d37387da22.scope - libcontainer container 6979b55fa5a5b9598b7f00d549dbd8ee80a43dcc5951facf03cda4d37387da22. Mar 14 00:13:57.437954 systemd-networkd[1362]: cali43a6f8fa5ca: Gained IPv6LL Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.357 [INFO][4717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.359 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" iface="eth0" netns="/var/run/netns/cni-c3ac60a0-ede5-a78b-9180-6afb3bbd77d6" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.363 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" iface="eth0" netns="/var/run/netns/cni-c3ac60a0-ede5-a78b-9180-6afb3bbd77d6" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.367 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" iface="eth0" netns="/var/run/netns/cni-c3ac60a0-ede5-a78b-9180-6afb3bbd77d6" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.367 [INFO][4717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.367 [INFO][4717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.426 [INFO][4748] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.426 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.426 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.444 [WARNING][4748] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.444 [INFO][4748] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.447 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.483367 containerd[1466]: 2026-03-14 00:13:57.467 [INFO][4717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Mar 14 00:13:57.490722 kubelet[2530]: E0314 00:13:57.487022 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:57.491462 containerd[1466]: time="2026-03-14T00:13:57.487706532Z" level=info msg="TearDown network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" successfully" Mar 14 00:13:57.491462 containerd[1466]: time="2026-03-14T00:13:57.487775694Z" level=info msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" returns successfully" Mar 14 00:13:57.493426 systemd[1]: run-netns-cni\x2dc3ac60a0\x2dede5\x2da78b\x2d9180\x2d6afb3bbd77d6.mount: Deactivated successfully. 
Mar 14 00:13:57.515619 containerd[1466]: time="2026-03-14T00:13:57.508359924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-kgmhm,Uid:890dbf4b-5710-47dc-9a3c-1eca2584bc93,Namespace:calico-system,Attempt:1,}"
Mar 14 00:13:57.521864 kubelet[2530]: I0314 00:13:57.521046 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-h2qg5" podStartSLOduration=34.521030943 podStartE2EDuration="34.521030943s" podCreationTimestamp="2026-03-14 00:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:57.520969232 +0000 UTC m=+40.396891613" watchObservedRunningTime="2026-03-14 00:13:57.521030943 +0000 UTC m=+40.396953274"
Mar 14 00:13:57.524112 kubelet[2530]: E0314 00:13:57.524072 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:13:57.555077 containerd[1466]: time="2026-03-14T00:13:57.551847535Z" level=info msg="StartContainer for \"6979b55fa5a5b9598b7f00d549dbd8ee80a43dcc5951facf03cda4d37387da22\" returns successfully"
Mar 14 00:13:57.566350 systemd-networkd[1362]: calic5a5a0acaaf: Gained IPv6LL
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.365 [INFO][4716] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.366 [INFO][4716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" iface="eth0" netns="/var/run/netns/cni-8c12a2c1-8a16-f93b-3c2a-c68a1c7a6129"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.367 [INFO][4716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" iface="eth0" netns="/var/run/netns/cni-8c12a2c1-8a16-f93b-3c2a-c68a1c7a6129"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.370 [INFO][4716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" iface="eth0" netns="/var/run/netns/cni-8c12a2c1-8a16-f93b-3c2a-c68a1c7a6129"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.370 [INFO][4716] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.370 [INFO][4716] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.469 [INFO][4750] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.470 [INFO][4750] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.471 [INFO][4750] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.502 [WARNING][4750] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.502 [INFO][4750] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.526 [INFO][4750] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:13:57.584673 containerd[1466]: 2026-03-14 00:13:57.549 [INFO][4716] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790"
Mar 14 00:13:57.586378 containerd[1466]: time="2026-03-14T00:13:57.585530822Z" level=info msg="TearDown network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" successfully"
Mar 14 00:13:57.586378 containerd[1466]: time="2026-03-14T00:13:57.585616964Z" level=info msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" returns successfully"
Mar 14 00:13:57.590354 containerd[1466]: time="2026-03-14T00:13:57.589443826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-8h4g2,Uid:9494d07f-08cc-408c-af89-27df5fb41f1e,Namespace:calico-system,Attempt:1,}"
Mar 14 00:13:57.800730 systemd-networkd[1362]: caliafc369447c9: Link UP
Mar 14 00:13:57.802181 systemd-networkd[1362]: caliafc369447c9: Gained carrier
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.676 [INFO][4779] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0 calico-apiserver-6dcdbf46fb- calico-system 890dbf4b-5710-47dc-9a3c-1eca2584bc93 990 0 2026-03-14 00:13:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dcdbf46fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-218-137 calico-apiserver-6dcdbf46fb-kgmhm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliafc369447c9 [] [] }} ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.677 [INFO][4779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.730 [INFO][4812] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" HandleID="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.745 [INFO][4812] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" HandleID="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"calico-apiserver-6dcdbf46fb-kgmhm", "timestamp":"2026-03-14 00:13:57.730190888 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000544f20)}
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.745 [INFO][4812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.745 [INFO][4812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.745 [INFO][4812] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137'
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.750 [INFO][4812] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.757 [INFO][4812] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.765 [INFO][4812] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.771 [INFO][4812] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.778 [INFO][4812] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.778 [INFO][4812] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.780 [INFO][4812] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.786 [INFO][4812] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4812] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4812] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" host="172-233-218-137"
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:13:57.821655 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4812] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" HandleID="k8s-pod-network.5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.796 [INFO][4779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"890dbf4b-5710-47dc-9a3c-1eca2584bc93", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"calico-apiserver-6dcdbf46fb-kgmhm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliafc369447c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.796 [INFO][4779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.69/32] ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.796 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafc369447c9 ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.803 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.803 [INFO][4779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"890dbf4b-5710-47dc-9a3c-1eca2584bc93", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748", Pod:"calico-apiserver-6dcdbf46fb-kgmhm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliafc369447c9", MAC:"de:16:81:3b:0f:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:13:57.822140 containerd[1466]: 2026-03-14 00:13:57.814 [INFO][4779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-kgmhm" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:13:57.847170 containerd[1466]: time="2026-03-14T00:13:57.846895674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:57.847170 containerd[1466]: time="2026-03-14T00:13:57.846959026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:57.847170 containerd[1466]: time="2026-03-14T00:13:57.846972226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:57.847736 containerd[1466]: time="2026-03-14T00:13:57.847144369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:57.878690 systemd[1]: Started cri-containerd-5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748.scope - libcontainer container 5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748.
Mar 14 00:13:57.896556 systemd-networkd[1362]: cali17d4be7279a: Link UP
Mar 14 00:13:57.897930 systemd-networkd[1362]: cali17d4be7279a: Gained carrier
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.714 [INFO][4797] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0 calico-apiserver-6dcdbf46fb- calico-system 9494d07f-08cc-408c-af89-27df5fb41f1e 991 0 2026-03-14 00:13:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dcdbf46fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-218-137 calico-apiserver-6dcdbf46fb-8h4g2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali17d4be7279a [] [] }} ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.714 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.776 [INFO][4820] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" HandleID="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.786 [INFO][4820] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" HandleID="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbae0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"calico-apiserver-6dcdbf46fb-8h4g2", "timestamp":"2026-03-14 00:13:57.776696917 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003771e0)}
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.786 [INFO][4820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.793 [INFO][4820] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137'
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.853 [INFO][4820] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.857 [INFO][4820] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.865 [INFO][4820] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.870 [INFO][4820] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.873 [INFO][4820] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.873 [INFO][4820] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.877 [INFO][4820] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.882 [INFO][4820] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.888 [INFO][4820] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.70/26] block=192.168.126.64/26 handle="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.888 [INFO][4820] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.70/26] handle="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" host="172-233-218-137"
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.888 [INFO][4820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:13:57.920467 containerd[1466]: 2026-03-14 00:13:57.888 [INFO][4820] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.70/26] IPv6=[] ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" HandleID="k8s-pod-network.07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.891 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"9494d07f-08cc-408c-af89-27df5fb41f1e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"calico-apiserver-6dcdbf46fb-8h4g2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali17d4be7279a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.892 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.70/32] ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.892 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17d4be7279a ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.899 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.900 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"9494d07f-08cc-408c-af89-27df5fb41f1e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2", Pod:"calico-apiserver-6dcdbf46fb-8h4g2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali17d4be7279a", MAC:"ca:e0:59:80:0a:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:13:57.921015 containerd[1466]: 2026-03-14 00:13:57.910 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2" Namespace="calico-system" Pod="calico-apiserver-6dcdbf46fb-8h4g2" WorkloadEndpoint="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0"
Mar 14 00:13:57.941840 containerd[1466]: time="2026-03-14T00:13:57.941442973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:57.943629 containerd[1466]: time="2026-03-14T00:13:57.942607294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:57.943629 containerd[1466]: time="2026-03-14T00:13:57.942625804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:57.943629 containerd[1466]: time="2026-03-14T00:13:57.942705346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:57.978990 systemd[1]: Started cri-containerd-07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2.scope - libcontainer container 07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2.
Mar 14 00:13:57.981775 containerd[1466]: time="2026-03-14T00:13:57.981726713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-kgmhm,Uid:890dbf4b-5710-47dc-9a3c-1eca2584bc93,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748\""
Mar 14 00:13:57.984256 containerd[1466]: time="2026-03-14T00:13:57.984180920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 14 00:13:58.038597 containerd[1466]: time="2026-03-14T00:13:58.038089273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dcdbf46fb-8h4g2,Uid:9494d07f-08cc-408c-af89-27df5fb41f1e,Namespace:calico-system,Attempt:1,} returns sandbox id \"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2\""
Mar 14 00:13:58.226086 containerd[1466]: time="2026-03-14T00:13:58.225979082Z" level=info msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\""
Mar 14 00:13:58.271977 systemd-networkd[1362]: caliaa4ac6d2340: Gained IPv6LL
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.278 [INFO][4953] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.278 [INFO][4953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" iface="eth0" netns="/var/run/netns/cni-a05492e5-49fc-1f11-0fbf-474959fbdf85"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.279 [INFO][4953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" iface="eth0" netns="/var/run/netns/cni-a05492e5-49fc-1f11-0fbf-474959fbdf85"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.279 [INFO][4953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" iface="eth0" netns="/var/run/netns/cni-a05492e5-49fc-1f11-0fbf-474959fbdf85"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.279 [INFO][4953] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.280 [INFO][4953] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.303 [INFO][4961] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.304 [INFO][4961] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.304 [INFO][4961] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.310 [WARNING][4961] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.310 [INFO][4961] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.311 [INFO][4961] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:13:58.316817 containerd[1466]: 2026-03-14 00:13:58.314 [INFO][4953] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:13:58.317648 containerd[1466]: time="2026-03-14T00:13:58.317619330Z" level=info msg="TearDown network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" successfully"
Mar 14 00:13:58.317648 containerd[1466]: time="2026-03-14T00:13:58.317645460Z" level=info msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" returns successfully"
Mar 14 00:13:58.319498 containerd[1466]: time="2026-03-14T00:13:58.319457862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b54b945f5-vqsjc,Uid:5071b63d-65a6-4318-a3dc-58009573e7ce,Namespace:calico-system,Attempt:1,}"
Mar 14 00:13:58.330779 kubelet[2530]: I0314 00:13:58.330757 2530 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 14 00:13:58.331096 kubelet[2530]: I0314 00:13:58.330969 2530 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 14 00:13:58.337173 systemd[1]: run-netns-cni\x2d8c12a2c1\x2d8a16\x2df93b\x2d3c2a\x2dc68a1c7a6129.mount: Deactivated successfully.
Mar 14 00:13:58.337312 systemd[1]: run-netns-cni\x2da05492e5\x2d49fc\x2d1f11\x2d0fbf\x2d474959fbdf85.mount: Deactivated successfully.
Mar 14 00:13:58.446693 systemd-networkd[1362]: cali6a42adcbe45: Link UP
Mar 14 00:13:58.448937 systemd-networkd[1362]: cali6a42adcbe45: Gained carrier
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.380 [INFO][4971] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0 calico-kube-controllers-7b54b945f5- calico-system 5071b63d-65a6-4318-a3dc-58009573e7ce 1014 0 2026-03-14 00:13:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b54b945f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-233-218-137 calico-kube-controllers-7b54b945f5-vqsjc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a42adcbe45 [] [] }} ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-"
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.380 [INFO][4971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.404 [INFO][4982] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" HandleID="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.410 [INFO][4982] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" HandleID="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002777c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"calico-kube-controllers-7b54b945f5-vqsjc", "timestamp":"2026-03-14 00:13:58.404724139 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002dd760)}
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.411 [INFO][4982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.411 [INFO][4982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.411 [INFO][4982] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.413 [INFO][4982] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.417 [INFO][4982] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.421 [INFO][4982] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.422 [INFO][4982] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.424 [INFO][4982] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.424 [INFO][4982] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.425 [INFO][4982] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4 Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.429 [INFO][4982] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.433 [INFO][4982] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.71/26] block=192.168.126.64/26 
handle="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.434 [INFO][4982] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.71/26] handle="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" host="172-233-218-137" Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.434 [INFO][4982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:58.470755 containerd[1466]: 2026-03-14 00:13:58.434 [INFO][4982] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.71/26] IPv6=[] ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" HandleID="k8s-pod-network.096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.437 [INFO][4971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0", GenerateName:"calico-kube-controllers-7b54b945f5-", Namespace:"calico-system", SelfLink:"", UID:"5071b63d-65a6-4318-a3dc-58009573e7ce", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b54b945f5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"calico-kube-controllers-7b54b945f5-vqsjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a42adcbe45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.439 [INFO][4971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.71/32] ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.439 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a42adcbe45 ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.449 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" 
WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.450 [INFO][4971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0", GenerateName:"calico-kube-controllers-7b54b945f5-", Namespace:"calico-system", SelfLink:"", UID:"5071b63d-65a6-4318-a3dc-58009573e7ce", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b54b945f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4", Pod:"calico-kube-controllers-7b54b945f5-vqsjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a42adcbe45", MAC:"f6:db:7a:13:44:1c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:58.471358 containerd[1466]: 2026-03-14 00:13:58.459 [INFO][4971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4" Namespace="calico-system" Pod="calico-kube-controllers-7b54b945f5-vqsjc" WorkloadEndpoint="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:13:58.506795 containerd[1466]: time="2026-03-14T00:13:58.506301174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:58.506795 containerd[1466]: time="2026-03-14T00:13:58.506511066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:58.506795 containerd[1466]: time="2026-03-14T00:13:58.506527717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:58.506795 containerd[1466]: time="2026-03-14T00:13:58.506632309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:58.541807 kubelet[2530]: E0314 00:13:58.541763 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:58.543376 kubelet[2530]: E0314 00:13:58.542069 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:58.553683 systemd[1]: Started cri-containerd-096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4.scope - libcontainer container 096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4. Mar 14 00:13:58.641499 containerd[1466]: time="2026-03-14T00:13:58.641461000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b54b945f5-vqsjc,Uid:5071b63d-65a6-4318-a3dc-58009573e7ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4\"" Mar 14 00:13:58.973779 systemd-networkd[1362]: cali17d4be7279a: Gained IPv6LL Mar 14 00:13:59.226631 containerd[1466]: time="2026-03-14T00:13:59.226438931Z" level=info msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" Mar 14 00:13:59.275999 kubelet[2530]: I0314 00:13:59.275611 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-mlvrd" podStartSLOduration=23.613063374 podStartE2EDuration="25.275596931s" podCreationTimestamp="2026-03-14 00:13:34 +0000 UTC" firstStartedPulling="2026-03-14 00:13:55.590872446 +0000 UTC m=+38.466794777" lastFinishedPulling="2026-03-14 00:13:57.253406003 +0000 UTC m=+40.129328334" observedRunningTime="2026-03-14 00:13:58.562024568 +0000 UTC m=+41.437946899" watchObservedRunningTime="2026-03-14 00:13:59.275596931 +0000 UTC 
m=+42.151519282" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.275 [INFO][5066] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.275 [INFO][5066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" iface="eth0" netns="/var/run/netns/cni-85ee96e9-41c4-23ae-5268-d1f97de208f5" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.276 [INFO][5066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" iface="eth0" netns="/var/run/netns/cni-85ee96e9-41c4-23ae-5268-d1f97de208f5" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.277 [INFO][5066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" iface="eth0" netns="/var/run/netns/cni-85ee96e9-41c4-23ae-5268-d1f97de208f5" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.277 [INFO][5066] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.277 [INFO][5066] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.307 [INFO][5074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.308 
[INFO][5074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.308 [INFO][5074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.316 [WARNING][5074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.316 [INFO][5074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.318 [INFO][5074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:59.331740 containerd[1466]: 2026-03-14 00:13:59.324 [INFO][5066] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:13:59.334093 containerd[1466]: time="2026-03-14T00:13:59.332018912Z" level=info msg="TearDown network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" successfully" Mar 14 00:13:59.334093 containerd[1466]: time="2026-03-14T00:13:59.332055283Z" level=info msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" returns successfully" Mar 14 00:13:59.336582 systemd[1]: run-netns-cni\x2d85ee96e9\x2d41c4\x2d23ae\x2d5268\x2dd1f97de208f5.mount: Deactivated successfully. 
Mar 14 00:13:59.338659 containerd[1466]: time="2026-03-14T00:13:59.338018031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5lvd2,Uid:c9b5e89e-6a6d-45ac-beaa-0696f3422320,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:59.359665 systemd-networkd[1362]: caliafc369447c9: Gained IPv6LL Mar 14 00:13:59.505639 systemd-networkd[1362]: cali4185d31dcd4: Link UP Mar 14 00:13:59.507694 systemd-networkd[1362]: cali4185d31dcd4: Gained carrier Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.412 [INFO][5081] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0 goldmane-9f7667bb8- calico-system c9b5e89e-6a6d-45ac-beaa-0696f3422320 1028 0 2026-03-14 00:13:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-233-218-137 goldmane-9f7667bb8-5lvd2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4185d31dcd4 [] [] }} ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.413 [INFO][5081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.458 [INFO][5098] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" 
HandleID="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.470 [INFO][5098] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" HandleID="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd410), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-218-137", "pod":"goldmane-9f7667bb8-5lvd2", "timestamp":"2026-03-14 00:13:59.458401627 +0000 UTC"}, Hostname:"172-233-218-137", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000300f20)} Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.470 [INFO][5098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.470 [INFO][5098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.470 [INFO][5098] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-218-137' Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.473 [INFO][5098] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.476 [INFO][5098] ipam/ipam.go 409: Looking up existing affinities for host host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.480 [INFO][5098] ipam/ipam.go 526: Trying affinity for 192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.482 [INFO][5098] ipam/ipam.go 160: Attempting to load block cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.484 [INFO][5098] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.484 [INFO][5098] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.487 [INFO][5098] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99 Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.491 [INFO][5098] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.497 [INFO][5098] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.126.72/26] block=192.168.126.64/26 
handle="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.498 [INFO][5098] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.126.72/26] handle="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" host="172-233-218-137" Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.498 [INFO][5098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:59.530169 containerd[1466]: 2026-03-14 00:13:59.498 [INFO][5098] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.126.72/26] IPv6=[] ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" HandleID="k8s-pod-network.dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.501 [INFO][5081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9b5e89e-6a6d-45ac-beaa-0696f3422320", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"", Pod:"goldmane-9f7667bb8-5lvd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4185d31dcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.501 [INFO][5081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.72/32] ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.501 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4185d31dcd4 ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.507 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.509 [INFO][5081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9b5e89e-6a6d-45ac-beaa-0696f3422320", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99", Pod:"goldmane-9f7667bb8-5lvd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4185d31dcd4", MAC:"7e:d4:01:8b:ed:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:59.531158 containerd[1466]: 2026-03-14 00:13:59.519 [INFO][5081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99" Namespace="calico-system" Pod="goldmane-9f7667bb8-5lvd2" WorkloadEndpoint="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:13:59.553234 
kubelet[2530]: E0314 00:13:59.552725 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 14 00:13:59.614694 containerd[1466]: time="2026-03-14T00:13:59.614601683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:59.614694 containerd[1466]: time="2026-03-14T00:13:59.614657004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:59.614694 containerd[1466]: time="2026-03-14T00:13:59.614670965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:59.615418 containerd[1466]: time="2026-03-14T00:13:59.614739326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:59.661840 containerd[1466]: time="2026-03-14T00:13:59.661806542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:59.663047 containerd[1466]: time="2026-03-14T00:13:59.662685537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:13:59.662833 systemd[1]: Started cri-containerd-dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99.scope - libcontainer container dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99. 
Mar 14 00:13:59.665705 containerd[1466]: time="2026-03-14T00:13:59.665056006Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:59.670297 containerd[1466]: time="2026-03-14T00:13:59.669052942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:59.670297 containerd[1466]: time="2026-03-14T00:13:59.669955927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.685747696s"
Mar 14 00:13:59.670297 containerd[1466]: time="2026-03-14T00:13:59.669978437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 14 00:13:59.672748 containerd[1466]: time="2026-03-14T00:13:59.672700622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 14 00:13:59.674481 containerd[1466]: time="2026-03-14T00:13:59.674422381Z" level=info msg="CreateContainer within sandbox \"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 14 00:13:59.699095 containerd[1466]: time="2026-03-14T00:13:59.699061417Z" level=info msg="CreateContainer within sandbox \"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a499f6f6a1851b603d20328dfd3825003a7cee005a2b70e5a260ea51a0a7ebdd\""
Mar 14 00:13:59.701761 containerd[1466]: time="2026-03-14T00:13:59.701726801Z" level=info msg="StartContainer for \"a499f6f6a1851b603d20328dfd3825003a7cee005a2b70e5a260ea51a0a7ebdd\""
Mar 14 00:13:59.753784 systemd[1]: Started cri-containerd-a499f6f6a1851b603d20328dfd3825003a7cee005a2b70e5a260ea51a0a7ebdd.scope - libcontainer container a499f6f6a1851b603d20328dfd3825003a7cee005a2b70e5a260ea51a0a7ebdd.
Mar 14 00:13:59.770688 containerd[1466]: time="2026-03-14T00:13:59.769530979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5lvd2,Uid:c9b5e89e-6a6d-45ac-beaa-0696f3422320,Namespace:calico-system,Attempt:1,} returns sandbox id \"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99\""
Mar 14 00:13:59.811948 containerd[1466]: time="2026-03-14T00:13:59.811912119Z" level=info msg="StartContainer for \"a499f6f6a1851b603d20328dfd3825003a7cee005a2b70e5a260ea51a0a7ebdd\" returns successfully"
Mar 14 00:13:59.848620 containerd[1466]: time="2026-03-14T00:13:59.848554103Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:59.849069 containerd[1466]: time="2026-03-14T00:13:59.849031091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 14 00:13:59.851256 containerd[1466]: time="2026-03-14T00:13:59.851168017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 178.072248ms"
Mar 14 00:13:59.851256 containerd[1466]: time="2026-03-14T00:13:59.851192957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 14 00:13:59.853138 containerd[1466]: time="2026-03-14T00:13:59.852574679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 14 00:13:59.856625 containerd[1466]: time="2026-03-14T00:13:59.856600566Z" level=info msg="CreateContainer within sandbox \"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 14 00:13:59.870633 containerd[1466]: time="2026-03-14T00:13:59.870606097Z" level=info msg="CreateContainer within sandbox \"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"78bb066ea685da762749ae82cb23604598ba209084b4e6b69693ebcbfc7a9de6\""
Mar 14 00:13:59.872884 containerd[1466]: time="2026-03-14T00:13:59.871172567Z" level=info msg="StartContainer for \"78bb066ea685da762749ae82cb23604598ba209084b4e6b69693ebcbfc7a9de6\""
Mar 14 00:13:59.907921 systemd[1]: Started cri-containerd-78bb066ea685da762749ae82cb23604598ba209084b4e6b69693ebcbfc7a9de6.scope - libcontainer container 78bb066ea685da762749ae82cb23604598ba209084b4e6b69693ebcbfc7a9de6.
Mar 14 00:13:59.933750 systemd-networkd[1362]: cali6a42adcbe45: Gained IPv6LL
Mar 14 00:13:59.962017 containerd[1466]: time="2026-03-14T00:13:59.961988515Z" level=info msg="StartContainer for \"78bb066ea685da762749ae82cb23604598ba209084b4e6b69693ebcbfc7a9de6\" returns successfully"
Mar 14 00:14:00.586080 kubelet[2530]: I0314 00:14:00.585730 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6dcdbf46fb-8h4g2" podStartSLOduration=25.77420974 podStartE2EDuration="27.585718325s" podCreationTimestamp="2026-03-14 00:13:33 +0000 UTC" firstStartedPulling="2026-03-14 00:13:58.040944943 +0000 UTC m=+40.916867274" lastFinishedPulling="2026-03-14 00:13:59.852453528 +0000 UTC m=+42.728375859" observedRunningTime="2026-03-14 00:14:00.57247049 +0000 UTC m=+43.448392831" watchObservedRunningTime="2026-03-14 00:14:00.585718325 +0000 UTC m=+43.461640666"
Mar 14 00:14:01.021868 systemd-networkd[1362]: cali4185d31dcd4: Gained IPv6LL
Mar 14 00:14:01.562477 kubelet[2530]: I0314 00:14:01.562253 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:01.563750 kubelet[2530]: I0314 00:14:01.562715 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:01.985639 containerd[1466]: time="2026-03-14T00:14:01.985367816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:01.986533 containerd[1466]: time="2026-03-14T00:14:01.986099806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 14 00:14:01.987034 containerd[1466]: time="2026-03-14T00:14:01.986994019Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:01.988864 containerd[1466]: time="2026-03-14T00:14:01.988836625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:01.989791 containerd[1466]: time="2026-03-14T00:14:01.989688757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.137090757s"
Mar 14 00:14:01.989791 containerd[1466]: time="2026-03-14T00:14:01.989715928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 14 00:14:01.991324 containerd[1466]: time="2026-03-14T00:14:01.990968676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 14 00:14:02.002598 containerd[1466]: time="2026-03-14T00:14:02.002394059Z" level=info msg="CreateContainer within sandbox \"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 14 00:14:02.011035 containerd[1466]: time="2026-03-14T00:14:02.010910873Z" level=info msg="CreateContainer within sandbox \"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6\""
Mar 14 00:14:02.018453 containerd[1466]: time="2026-03-14T00:14:02.018426824Z" level=info msg="StartContainer for \"112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6\""
Mar 14 00:14:02.058706 systemd[1]: Started cri-containerd-112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6.scope - libcontainer container 112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6.
Mar 14 00:14:02.113528 containerd[1466]: time="2026-03-14T00:14:02.113469269Z" level=info msg="StartContainer for \"112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6\" returns successfully"
Mar 14 00:14:02.338714 kubelet[2530]: I0314 00:14:02.338674 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:02.427590 kubelet[2530]: I0314 00:14:02.426448 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6dcdbf46fb-kgmhm" podStartSLOduration=27.738735928 podStartE2EDuration="29.426437448s" podCreationTimestamp="2026-03-14 00:13:33 +0000 UTC" firstStartedPulling="2026-03-14 00:13:57.983298574 +0000 UTC m=+40.859220915" lastFinishedPulling="2026-03-14 00:13:59.671000104 +0000 UTC m=+42.546922435" observedRunningTime="2026-03-14 00:14:00.587278109 +0000 UTC m=+43.463200450" watchObservedRunningTime="2026-03-14 00:14:02.426437448 +0000 UTC m=+45.302359779"
Mar 14 00:14:02.582391 kubelet[2530]: I0314 00:14:02.582134 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b54b945f5-vqsjc" podStartSLOduration=25.234731227 podStartE2EDuration="28.582120337s" podCreationTimestamp="2026-03-14 00:13:34 +0000 UTC" firstStartedPulling="2026-03-14 00:13:58.643069759 +0000 UTC m=+41.518992090" lastFinishedPulling="2026-03-14 00:14:01.990458869 +0000 UTC m=+44.866381200" observedRunningTime="2026-03-14 00:14:02.581812002 +0000 UTC m=+45.457734333" watchObservedRunningTime="2026-03-14 00:14:02.582120337 +0000 UTC m=+45.458042668"
Mar 14 00:14:03.003119 systemd[1]: run-containerd-runc-k8s.io-112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6-runc.85Fi6R.mount: Deactivated successfully.
Mar 14 00:14:03.072378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105826078.mount: Deactivated successfully.
Mar 14 00:14:03.420395 containerd[1466]: time="2026-03-14T00:14:03.419938350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:03.421462 containerd[1466]: time="2026-03-14T00:14:03.420893872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 14 00:14:03.421462 containerd[1466]: time="2026-03-14T00:14:03.421083405Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:03.423022 containerd[1466]: time="2026-03-14T00:14:03.422996829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:03.423799 containerd[1466]: time="2026-03-14T00:14:03.423776818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.432786472s"
Mar 14 00:14:03.423897 containerd[1466]: time="2026-03-14T00:14:03.423858229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 14 00:14:03.430818 containerd[1466]: time="2026-03-14T00:14:03.430709415Z" level=info msg="CreateContainer within sandbox \"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 14 00:14:03.443041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45039970.mount: Deactivated successfully.
Mar 14 00:14:03.448933 containerd[1466]: time="2026-03-14T00:14:03.448911183Z" level=info msg="CreateContainer within sandbox \"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d\""
Mar 14 00:14:03.449399 containerd[1466]: time="2026-03-14T00:14:03.449375529Z" level=info msg="StartContainer for \"2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d\""
Mar 14 00:14:03.491688 systemd[1]: Started cri-containerd-2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d.scope - libcontainer container 2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d.
Mar 14 00:14:03.532492 containerd[1466]: time="2026-03-14T00:14:03.532440459Z" level=info msg="StartContainer for \"2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d\" returns successfully"
Mar 14 00:14:03.571503 kubelet[2530]: I0314 00:14:03.571482 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:03.584848 kubelet[2530]: I0314 00:14:03.584267 2530 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-5lvd2" podStartSLOduration=26.928525881 podStartE2EDuration="30.584255057s" podCreationTimestamp="2026-03-14 00:13:33 +0000 UTC" firstStartedPulling="2026-03-14 00:13:59.772188434 +0000 UTC m=+42.648110765" lastFinishedPulling="2026-03-14 00:14:03.42791761 +0000 UTC m=+46.303839941" observedRunningTime="2026-03-14 00:14:03.582531415 +0000 UTC m=+46.458453756" watchObservedRunningTime="2026-03-14 00:14:03.584255057 +0000 UTC m=+46.460177388"
Mar 14 00:14:12.951452 kubelet[2530]: I0314 00:14:12.951042 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:12.974974 systemd[1]: run-containerd-runc-k8s.io-112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6-runc.3H8gwe.mount: Deactivated successfully.
Mar 14 00:14:17.208392 containerd[1466]: time="2026-03-14T00:14:17.208343909Z" level=info msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\""
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.241 [WARNING][5543] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"890dbf4b-5710-47dc-9a3c-1eca2584bc93", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748", Pod:"calico-apiserver-6dcdbf46fb-kgmhm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliafc369447c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.241 [INFO][5543] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.241 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" iface="eth0" netns=""
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.241 [INFO][5543] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.241 [INFO][5543] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.262 [INFO][5553] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.262 [INFO][5553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.262 [INFO][5553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.267 [WARNING][5553] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.267 [INFO][5553] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.268 [INFO][5553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:14:17.273133 containerd[1466]: 2026-03-14 00:14:17.270 [INFO][5543] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.274297 containerd[1466]: time="2026-03-14T00:14:17.273163839Z" level=info msg="TearDown network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" successfully"
Mar 14 00:14:17.274297 containerd[1466]: time="2026-03-14T00:14:17.273200389Z" level=info msg="StopPodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" returns successfully"
Mar 14 00:14:17.274297 containerd[1466]: time="2026-03-14T00:14:17.273901022Z" level=info msg="RemovePodSandbox for \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\""
Mar 14 00:14:17.274297 containerd[1466]: time="2026-03-14T00:14:17.273923742Z" level=info msg="Forcibly stopping sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\""
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.308 [WARNING][5568] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"890dbf4b-5710-47dc-9a3c-1eca2584bc93", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"5b0114da04ad598d851a27e9910ff15a2a8e44b8c53d69e12e2c082b31c71748", Pod:"calico-apiserver-6dcdbf46fb-kgmhm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliafc369447c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.308 [INFO][5568] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.309 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" iface="eth0" netns=""
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.309 [INFO][5568] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.309 [INFO][5568] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.342 [INFO][5575] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.342 [INFO][5575] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.342 [INFO][5575] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.348 [WARNING][5575] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.348 [INFO][5575] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" HandleID="k8s-pod-network.4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--kgmhm-eth0"
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.349 [INFO][5575] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:14:17.354246 containerd[1466]: 2026-03-14 00:14:17.352 [INFO][5568] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4"
Mar 14 00:14:17.355642 containerd[1466]: time="2026-03-14T00:14:17.354929665Z" level=info msg="TearDown network for sandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" successfully"
Mar 14 00:14:17.359334 containerd[1466]: time="2026-03-14T00:14:17.359309795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:14:17.359602 containerd[1466]: time="2026-03-14T00:14:17.359364355Z" level=info msg="RemovePodSandbox \"4687667c5b7c5469325307347e2aa0c22258aa147de6cdecc1eb49b51886ada4\" returns successfully"
Mar 14 00:14:17.360004 containerd[1466]: time="2026-03-14T00:14:17.359984278Z" level=info msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\""
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.390 [WARNING][5589] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" WorkloadEndpoint="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.391 [INFO][5589] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.391 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" iface="eth0" netns=""
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.391 [INFO][5589] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.391 [INFO][5589] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.411 [INFO][5596] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.411 [INFO][5596] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.412 [INFO][5596] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.417 [WARNING][5596] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.417 [INFO][5596] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.418 [INFO][5596] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:14:17.424070 containerd[1466]: 2026-03-14 00:14:17.420 [INFO][5589] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.424070 containerd[1466]: time="2026-03-14T00:14:17.423956534Z" level=info msg="TearDown network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" successfully"
Mar 14 00:14:17.424070 containerd[1466]: time="2026-03-14T00:14:17.423977184Z" level=info msg="StopPodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" returns successfully"
Mar 14 00:14:17.425005 containerd[1466]: time="2026-03-14T00:14:17.424977869Z" level=info msg="RemovePodSandbox for \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\""
Mar 14 00:14:17.425005 containerd[1466]: time="2026-03-14T00:14:17.425005269Z" level=info msg="Forcibly stopping sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\""
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.456 [WARNING][5610] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" WorkloadEndpoint="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.456 [INFO][5610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.456 [INFO][5610] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" iface="eth0" netns=""
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.456 [INFO][5610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.456 [INFO][5610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.479 [INFO][5617] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.479 [INFO][5617] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.479 [INFO][5617] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.485 [WARNING][5617] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.485 [INFO][5617] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" HandleID="k8s-pod-network.7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013" Workload="172--233--218--137-k8s-whisker--7bc96cdf7c--gtz6k-eth0"
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.486 [INFO][5617] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:14:17.491957 containerd[1466]: 2026-03-14 00:14:17.489 [INFO][5610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013"
Mar 14 00:14:17.491957 containerd[1466]: time="2026-03-14T00:14:17.491920598Z" level=info msg="TearDown network for sandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" successfully"
Mar 14 00:14:17.496096 containerd[1466]: time="2026-03-14T00:14:17.496065797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:14:17.496193 containerd[1466]: time="2026-03-14T00:14:17.496120687Z" level=info msg="RemovePodSandbox \"7dae034218005bbb0133a958fc56724be354e5c8e4d04537798112c307f33013\" returns successfully"
Mar 14 00:14:17.496749 containerd[1466]: time="2026-03-14T00:14:17.496488769Z" level=info msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\""
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.529 [WARNING][5631] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0", GenerateName:"calico-kube-controllers-7b54b945f5-", Namespace:"calico-system", SelfLink:"", UID:"5071b63d-65a6-4318-a3dc-58009573e7ce", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b54b945f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4", Pod:"calico-kube-controllers-7b54b945f5-vqsjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a42adcbe45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.529 [INFO][5631] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.529 [INFO][5631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" iface="eth0" netns=""
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.529 [INFO][5631] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.529 [INFO][5631] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.550 [INFO][5639] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.550 [INFO][5639] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.550 [INFO][5639] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.557 [WARNING][5639] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.557 [INFO][5639] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0"
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.559 [INFO][5639] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:14:17.563306 containerd[1466]: 2026-03-14 00:14:17.561 [INFO][5631] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15"
Mar 14 00:14:17.563888 containerd[1466]: time="2026-03-14T00:14:17.563339228Z" level=info msg="TearDown network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" successfully"
Mar 14 00:14:17.563888 containerd[1466]: time="2026-03-14T00:14:17.563362758Z" level=info msg="StopPodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" returns successfully"
Mar 14 00:14:17.564265 containerd[1466]: time="2026-03-14T00:14:17.564245962Z" level=info msg="RemovePodSandbox for \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\""
Mar 14 00:14:17.564328 containerd[1466]: time="2026-03-14T00:14:17.564302372Z" level=info msg="Forcibly stopping sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\""
Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.595 [WARNING][5654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0", GenerateName:"calico-kube-controllers-7b54b945f5-", Namespace:"calico-system", SelfLink:"", UID:"5071b63d-65a6-4318-a3dc-58009573e7ce", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b54b945f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"096b2b6cb7d6a86b47ed205d051b3a50788fbed79640929d77accd6dbe7267a4", Pod:"calico-kube-controllers-7b54b945f5-vqsjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a42adcbe45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.596 [INFO][5654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.596 [INFO][5654] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" iface="eth0" netns="" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.596 [INFO][5654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.596 [INFO][5654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.619 [INFO][5661] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.619 [INFO][5661] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.619 [INFO][5661] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.631 [WARNING][5661] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.631 [INFO][5661] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" HandleID="k8s-pod-network.c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Workload="172--233--218--137-k8s-calico--kube--controllers--7b54b945f5--vqsjc-eth0" Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.632 [INFO][5661] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.638712 containerd[1466]: 2026-03-14 00:14:17.635 [INFO][5654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15" Mar 14 00:14:17.638712 containerd[1466]: time="2026-03-14T00:14:17.637547839Z" level=info msg="TearDown network for sandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" successfully" Mar 14 00:14:17.640756 containerd[1466]: time="2026-03-14T00:14:17.640731014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:17.640851 containerd[1466]: time="2026-03-14T00:14:17.640776064Z" level=info msg="RemovePodSandbox \"c01eefe003d0f71fb42474362548f9ccaff6e9d0fe7f658e5856ffac99bcfe15\" returns successfully" Mar 14 00:14:17.641084 containerd[1466]: time="2026-03-14T00:14:17.641066406Z" level=info msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.670 [WARNING][5675] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c418ad6-e69b-4b1f-bded-2e2a531bde69", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890", Pod:"coredns-7d764666f9-h2qg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4ac6d2340", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.670 [INFO][5675] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.670 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" iface="eth0" netns="" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.671 [INFO][5675] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.671 [INFO][5675] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.691 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.691 [INFO][5682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.692 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.698 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.698 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.699 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.704714 containerd[1466]: 2026-03-14 00:14:17.701 [INFO][5675] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.704714 containerd[1466]: time="2026-03-14T00:14:17.704580720Z" level=info msg="TearDown network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" successfully" Mar 14 00:14:17.704714 containerd[1466]: time="2026-03-14T00:14:17.704600730Z" level=info msg="StopPodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" returns successfully" Mar 14 00:14:17.706066 containerd[1466]: time="2026-03-14T00:14:17.705247253Z" level=info msg="RemovePodSandbox for \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" Mar 14 00:14:17.706066 containerd[1466]: time="2026-03-14T00:14:17.705271293Z" level=info msg="Forcibly stopping sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\"" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.747 [WARNING][5696] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c418ad6-e69b-4b1f-bded-2e2a531bde69", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"d89fda0c6af5c83aa8697bca99e89b5ac070a19d843bebaccace847732539890", Pod:"coredns-7d764666f9-h2qg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa4ac6d2340", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.748 [INFO][5696] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.748 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" iface="eth0" netns="" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.748 [INFO][5696] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.748 [INFO][5696] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.769 [INFO][5703] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.769 [INFO][5703] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.769 [INFO][5703] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.775 [WARNING][5703] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.775 [INFO][5703] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" HandleID="k8s-pod-network.02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Workload="172--233--218--137-k8s-coredns--7d764666f9--h2qg5-eth0" Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.776 [INFO][5703] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.781126 containerd[1466]: 2026-03-14 00:14:17.779 [INFO][5696] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4" Mar 14 00:14:17.781507 containerd[1466]: time="2026-03-14T00:14:17.781163543Z" level=info msg="TearDown network for sandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" successfully" Mar 14 00:14:17.784986 containerd[1466]: time="2026-03-14T00:14:17.784962730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:17.787600 containerd[1466]: time="2026-03-14T00:14:17.785313182Z" level=info msg="RemovePodSandbox \"02833a9ab25ac83a1b96a10c5ff7bf7af8744286257af38808d51279f5144fb4\" returns successfully" Mar 14 00:14:17.789207 containerd[1466]: time="2026-03-14T00:14:17.789148359Z" level=info msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\"" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.823 [WARNING][5717] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b627313e-4d3d-42f3-aad2-4c1df6199113", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3", Pod:"coredns-7d764666f9-pngzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5a5a0acaaf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.823 [INFO][5717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.823 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" iface="eth0" netns="" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.823 [INFO][5717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.823 [INFO][5717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.844 [INFO][5724] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.844 [INFO][5724] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.844 [INFO][5724] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.849 [WARNING][5724] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.849 [INFO][5724] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.850 [INFO][5724] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.855146 containerd[1466]: 2026-03-14 00:14:17.853 [INFO][5717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.855757 containerd[1466]: time="2026-03-14T00:14:17.855195804Z" level=info msg="TearDown network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" successfully" Mar 14 00:14:17.855757 containerd[1466]: time="2026-03-14T00:14:17.855218685Z" level=info msg="StopPodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" returns successfully" Mar 14 00:14:17.856196 containerd[1466]: time="2026-03-14T00:14:17.856129039Z" level=info msg="RemovePodSandbox for \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\"" Mar 14 00:14:17.856196 containerd[1466]: time="2026-03-14T00:14:17.856157939Z" level=info msg="Forcibly stopping sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\"" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.887 [WARNING][5738] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b627313e-4d3d-42f3-aad2-4c1df6199113", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"0640a4cb5b9ca824f9811f419152f8b44ea48c082f0b989086bf7f15a660eec3", Pod:"coredns-7d764666f9-pngzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic5a5a0acaaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.887 [INFO][5738] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.887 [INFO][5738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" iface="eth0" netns="" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.887 [INFO][5738] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.887 [INFO][5738] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.909 [INFO][5745] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.909 [INFO][5745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.909 [INFO][5745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.915 [WARNING][5745] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.915 [INFO][5745] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" HandleID="k8s-pod-network.f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Workload="172--233--218--137-k8s-coredns--7d764666f9--pngzk-eth0" Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.916 [INFO][5745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.921357 containerd[1466]: 2026-03-14 00:14:17.918 [INFO][5738] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2" Mar 14 00:14:17.921948 containerd[1466]: time="2026-03-14T00:14:17.921376661Z" level=info msg="TearDown network for sandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" successfully" Mar 14 00:14:17.929211 containerd[1466]: time="2026-03-14T00:14:17.929171766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:17.929286 containerd[1466]: time="2026-03-14T00:14:17.929231516Z" level=info msg="RemovePodSandbox \"f0ccc28d2865039af577fc87f7f4fa840fc0cc5c9e1194373b65b64d745e7ee2\" returns successfully" Mar 14 00:14:17.929687 containerd[1466]: time="2026-03-14T00:14:17.929658107Z" level=info msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.961 [WARNING][5759] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9b5e89e-6a6d-45ac-beaa-0696f3422320", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99", Pod:"goldmane-9f7667bb8-5lvd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali4185d31dcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.962 [INFO][5759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.962 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" iface="eth0" netns="" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.962 [INFO][5759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.962 [INFO][5759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.983 [INFO][5766] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.983 [INFO][5766] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.983 [INFO][5766] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.988 [WARNING][5766] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.988 [INFO][5766] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.989 [INFO][5766] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:17.994097 containerd[1466]: 2026-03-14 00:14:17.991 [INFO][5759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:17.994737 containerd[1466]: time="2026-03-14T00:14:17.994147386Z" level=info msg="TearDown network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" successfully" Mar 14 00:14:17.994737 containerd[1466]: time="2026-03-14T00:14:17.994172256Z" level=info msg="StopPodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" returns successfully" Mar 14 00:14:17.995325 containerd[1466]: time="2026-03-14T00:14:17.995051010Z" level=info msg="RemovePodSandbox for \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" Mar 14 00:14:17.995325 containerd[1466]: time="2026-03-14T00:14:17.995078730Z" level=info msg="Forcibly stopping sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\"" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.026 [WARNING][5780] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9b5e89e-6a6d-45ac-beaa-0696f3422320", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"dabc691fe0b522fb9647175eb8a684ba81a133766102274a0dd5239772fa1f99", Pod:"goldmane-9f7667bb8-5lvd2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4185d31dcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.026 [INFO][5780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.026 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" iface="eth0" netns="" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.026 [INFO][5780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.026 [INFO][5780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.045 [INFO][5787] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.045 [INFO][5787] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.045 [INFO][5787] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.049 [WARNING][5787] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.049 [INFO][5787] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" HandleID="k8s-pod-network.79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Workload="172--233--218--137-k8s-goldmane--9f7667bb8--5lvd2-eth0" Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.051 [INFO][5787] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:18.055396 containerd[1466]: 2026-03-14 00:14:18.053 [INFO][5780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c" Mar 14 00:14:18.056613 containerd[1466]: time="2026-03-14T00:14:18.055831793Z" level=info msg="TearDown network for sandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" successfully" Mar 14 00:14:18.058945 containerd[1466]: time="2026-03-14T00:14:18.058917636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:18.059005 containerd[1466]: time="2026-03-14T00:14:18.058971156Z" level=info msg="RemovePodSandbox \"79ca0ca1236ce54c02d8f272c80dbfc1008ac4d92c2e8c8c464e477e0a22cc3c\" returns successfully" Mar 14 00:14:18.059370 containerd[1466]: time="2026-03-14T00:14:18.059351498Z" level=info msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\"" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.090 [WARNING][5802] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-csi--node--driver--mlvrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efc9842b-5041-4fb5-bc21-b23964d856d2", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc", Pod:"csi-node-driver-mlvrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43a6f8fa5ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.090 [INFO][5802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.090 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" iface="eth0" netns="" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.090 [INFO][5802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.090 [INFO][5802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.109 [INFO][5809] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.109 [INFO][5809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.109 [INFO][5809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.114 [WARNING][5809] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.114 [INFO][5809] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.115 [INFO][5809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:18.120258 containerd[1466]: 2026-03-14 00:14:18.117 [INFO][5802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.120724 containerd[1466]: time="2026-03-14T00:14:18.120323580Z" level=info msg="TearDown network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" successfully" Mar 14 00:14:18.120724 containerd[1466]: time="2026-03-14T00:14:18.120371211Z" level=info msg="StopPodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" returns successfully" Mar 14 00:14:18.125344 containerd[1466]: time="2026-03-14T00:14:18.125321761Z" level=info msg="RemovePodSandbox for \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\"" Mar 14 00:14:18.125396 containerd[1466]: time="2026-03-14T00:14:18.125352211Z" level=info msg="Forcibly stopping sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\"" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.156 [WARNING][5823] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-csi--node--driver--mlvrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efc9842b-5041-4fb5-bc21-b23964d856d2", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"48612208b5b45ec794265f5ca3e0a6aa2bb3bae4b136c0c0c3e3c01f8a553dbc", Pod:"csi-node-driver-mlvrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43a6f8fa5ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.156 [INFO][5823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.156 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" iface="eth0" netns="" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.156 [INFO][5823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.156 [INFO][5823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.177 [INFO][5831] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.177 [INFO][5831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.177 [INFO][5831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.183 [WARNING][5831] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.183 [INFO][5831] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" HandleID="k8s-pod-network.12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Workload="172--233--218--137-k8s-csi--node--driver--mlvrd-eth0" Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.184 [INFO][5831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:18.188673 containerd[1466]: 2026-03-14 00:14:18.186 [INFO][5823] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e" Mar 14 00:14:18.189234 containerd[1466]: time="2026-03-14T00:14:18.188706282Z" level=info msg="TearDown network for sandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" successfully" Mar 14 00:14:18.192192 containerd[1466]: time="2026-03-14T00:14:18.192144707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:18.192291 containerd[1466]: time="2026-03-14T00:14:18.192228818Z" level=info msg="RemovePodSandbox \"12339d94f7d062b524e3ba222a85539ce2601dd15f28557edd564b3048d7a06e\" returns successfully" Mar 14 00:14:18.192643 containerd[1466]: time="2026-03-14T00:14:18.192624409Z" level=info msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.223 [WARNING][5845] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"9494d07f-08cc-408c-af89-27df5fb41f1e", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2", Pod:"calico-apiserver-6dcdbf46fb-8h4g2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali17d4be7279a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.224 [INFO][5845] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.224 [INFO][5845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" iface="eth0" netns="" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.224 [INFO][5845] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.224 [INFO][5845] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.248 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.248 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.248 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.253 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.253 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.254 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:18.259050 containerd[1466]: 2026-03-14 00:14:18.256 [INFO][5845] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.259708 containerd[1466]: time="2026-03-14T00:14:18.259625006Z" level=info msg="TearDown network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" successfully" Mar 14 00:14:18.259708 containerd[1466]: time="2026-03-14T00:14:18.259649636Z" level=info msg="StopPodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" returns successfully" Mar 14 00:14:18.260202 containerd[1466]: time="2026-03-14T00:14:18.260175438Z" level=info msg="RemovePodSandbox for \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" Mar 14 00:14:18.260202 containerd[1466]: time="2026-03-14T00:14:18.260200649Z" level=info msg="Forcibly stopping sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\"" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.292 [WARNING][5867] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0", GenerateName:"calico-apiserver-6dcdbf46fb-", Namespace:"calico-system", SelfLink:"", UID:"9494d07f-08cc-408c-af89-27df5fb41f1e", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dcdbf46fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-218-137", ContainerID:"07e21a68aa499dc30ed519a599ae7f770c83d4637a36b3ff2651224c52a368a2", Pod:"calico-apiserver-6dcdbf46fb-8h4g2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali17d4be7279a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.292 [INFO][5867] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.292 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" iface="eth0" netns="" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.292 [INFO][5867] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.292 [INFO][5867] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.314 [INFO][5875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.315 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.315 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.319 [WARNING][5875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.319 [INFO][5875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" HandleID="k8s-pod-network.afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Workload="172--233--218--137-k8s-calico--apiserver--6dcdbf46fb--8h4g2-eth0" Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.321 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:18.327737 containerd[1466]: 2026-03-14 00:14:18.323 [INFO][5867] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790" Mar 14 00:14:18.327737 containerd[1466]: time="2026-03-14T00:14:18.326871964Z" level=info msg="TearDown network for sandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" successfully" Mar 14 00:14:18.331742 containerd[1466]: time="2026-03-14T00:14:18.331709604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:18.331814 containerd[1466]: time="2026-03-14T00:14:18.331799564Z" level=info msg="RemovePodSandbox \"afa1e6b5d073478c5d275c01c7a306e350e56b13ab7b9cccb0d4cb4f286b4790\" returns successfully"
Mar 14 00:14:19.483623 kubelet[2530]: I0314 00:14:19.482527 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:34.610959 systemd[1]: run-containerd-runc-k8s.io-2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d-runc.9tih2U.mount: Deactivated successfully.
Mar 14 00:14:37.229039 kubelet[2530]: E0314 00:14:37.228052 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:14:37.267234 kubelet[2530]: I0314 00:14:37.266318 2530 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:14:39.225621 kubelet[2530]: E0314 00:14:39.224852 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:14:48.224725 kubelet[2530]: E0314 00:14:48.224692 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:14:50.224499 kubelet[2530]: E0314 00:14:50.224469 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:15:00.224494 kubelet[2530]: E0314 00:15:00.224444 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:15:04.593203 systemd[1]: run-containerd-runc-k8s.io-2f2f7b07d6d243ed46300210ef6827bad931cbc6eba5d27043136cda1996ce2d-runc.dTpB1F.mount: Deactivated successfully.
Mar 14 00:15:12.224319 kubelet[2530]: E0314 00:15:12.224286 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:15:13.020020 systemd[1]: run-containerd-runc-k8s.io-112b2ec03f75499d9349745f7a15cf2c68fbcc19c20228003e845c82386adff6-runc.Ft2488.mount: Deactivated successfully.
Mar 14 00:15:13.667469 systemd[1]: Started sshd@7-172.233.218.137:22-4.153.228.146:34250.service - OpenSSH per-connection server daemon (4.153.228.146:34250).
Mar 14 00:15:13.836285 sshd[6091]: Accepted publickey for core from 4.153.228.146 port 34250 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:13.838547 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:13.844985 systemd-logind[1444]: New session 8 of user core.
Mar 14 00:15:13.851685 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:15:14.062025 sshd[6091]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:14.067180 systemd[1]: sshd@7-172.233.218.137:22-4.153.228.146:34250.service: Deactivated successfully.
Mar 14 00:15:14.070700 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:15:14.071534 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:15:14.072959 systemd-logind[1444]: Removed session 8.
Mar 14 00:15:18.224865 kubelet[2530]: E0314 00:15:18.224377 2530 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 14 00:15:19.107136 systemd[1]: Started sshd@8-172.233.218.137:22-4.153.228.146:54342.service - OpenSSH per-connection server daemon (4.153.228.146:54342).
Mar 14 00:15:19.282901 sshd[6107]: Accepted publickey for core from 4.153.228.146 port 54342 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:19.285039 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:19.290829 systemd-logind[1444]: New session 9 of user core.
Mar 14 00:15:19.299811 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:15:19.504862 sshd[6107]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:19.509445 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:15:19.510750 systemd[1]: sshd@8-172.233.218.137:22-4.153.228.146:54342.service: Deactivated successfully.
Mar 14 00:15:19.513167 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:15:19.514453 systemd-logind[1444]: Removed session 9.
Mar 14 00:15:24.547890 systemd[1]: Started sshd@9-172.233.218.137:22-4.153.228.146:54356.service - OpenSSH per-connection server daemon (4.153.228.146:54356).
Mar 14 00:15:24.703802 sshd[6162]: Accepted publickey for core from 4.153.228.146 port 54356 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:24.705994 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:24.711925 systemd-logind[1444]: New session 10 of user core.
Mar 14 00:15:24.720725 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:15:24.898622 sshd[6162]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:24.904024 systemd[1]: sshd@9-172.233.218.137:22-4.153.228.146:54356.service: Deactivated successfully.
Mar 14 00:15:24.907436 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:15:24.908158 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:15:24.909295 systemd-logind[1444]: Removed session 10.
Mar 14 00:15:29.937713 systemd[1]: Started sshd@10-172.233.218.137:22-4.153.228.146:41224.service - OpenSSH per-connection server daemon (4.153.228.146:41224).
Mar 14 00:15:30.126795 sshd[6200]: Accepted publickey for core from 4.153.228.146 port 41224 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:30.128387 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:30.132960 systemd-logind[1444]: New session 11 of user core.
Mar 14 00:15:30.141686 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:15:30.345950 sshd[6200]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:30.349271 systemd[1]: sshd@10-172.233.218.137:22-4.153.228.146:41224.service: Deactivated successfully.
Mar 14 00:15:30.351547 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:15:30.353427 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:15:30.354929 systemd-logind[1444]: Removed session 11.
Mar 14 00:15:30.374524 systemd[1]: Started sshd@11-172.233.218.137:22-4.153.228.146:41240.service - OpenSSH per-connection server daemon (4.153.228.146:41240).
Mar 14 00:15:30.529992 sshd[6214]: Accepted publickey for core from 4.153.228.146 port 41240 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:30.530934 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:30.535784 systemd-logind[1444]: New session 12 of user core.
Mar 14 00:15:30.544700 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:15:30.747731 sshd[6214]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:30.752495 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:15:30.753495 systemd[1]: sshd@11-172.233.218.137:22-4.153.228.146:41240.service: Deactivated successfully.
Mar 14 00:15:30.756258 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:15:30.758318 systemd-logind[1444]: Removed session 12.
Mar 14 00:15:30.780724 systemd[1]: Started sshd@12-172.233.218.137:22-4.153.228.146:41246.service - OpenSSH per-connection server daemon (4.153.228.146:41246).
Mar 14 00:15:30.941218 sshd[6225]: Accepted publickey for core from 4.153.228.146 port 41246 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:30.942745 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:30.947425 systemd-logind[1444]: New session 13 of user core.
Mar 14 00:15:30.952824 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:15:31.161706 sshd[6225]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:31.165489 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:15:31.169992 systemd[1]: sshd@12-172.233.218.137:22-4.153.228.146:41246.service: Deactivated successfully.
Mar 14 00:15:31.175962 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:15:31.177793 systemd-logind[1444]: Removed session 13.
Mar 14 00:15:36.194172 systemd[1]: Started sshd@13-172.233.218.137:22-4.153.228.146:41254.service - OpenSSH per-connection server daemon (4.153.228.146:41254).
Mar 14 00:15:36.345449 sshd[6281]: Accepted publickey for core from 4.153.228.146 port 41254 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:36.346249 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:36.351146 systemd-logind[1444]: New session 14 of user core.
Mar 14 00:15:36.356755 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:15:36.554352 sshd[6281]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:36.559131 systemd[1]: sshd@13-172.233.218.137:22-4.153.228.146:41254.service: Deactivated successfully.
Mar 14 00:15:36.562839 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:15:36.564446 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:15:36.566338 systemd-logind[1444]: Removed session 14.
Mar 14 00:15:36.590863 systemd[1]: Started sshd@14-172.233.218.137:22-4.153.228.146:41264.service - OpenSSH per-connection server daemon (4.153.228.146:41264).
Mar 14 00:15:36.772128 sshd[6294]: Accepted publickey for core from 4.153.228.146 port 41264 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:36.774301 sshd[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:36.779650 systemd-logind[1444]: New session 15 of user core.
Mar 14 00:15:36.789207 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:15:37.328711 sshd[6294]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:37.336025 systemd[1]: sshd@14-172.233.218.137:22-4.153.228.146:41264.service: Deactivated successfully.
Mar 14 00:15:37.339135 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:15:37.339891 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:15:37.341255 systemd-logind[1444]: Removed session 15.
Mar 14 00:15:37.362718 systemd[1]: Started sshd@15-172.233.218.137:22-4.153.228.146:41266.service - OpenSSH per-connection server daemon (4.153.228.146:41266).
Mar 14 00:15:37.539258 sshd[6306]: Accepted publickey for core from 4.153.228.146 port 41266 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:37.541011 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:37.547468 systemd-logind[1444]: New session 16 of user core.
Mar 14 00:15:37.551681 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:15:38.272551 sshd[6306]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:38.277752 systemd[1]: sshd@15-172.233.218.137:22-4.153.228.146:41266.service: Deactivated successfully.
Mar 14 00:15:38.282475 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:15:38.284648 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:15:38.286338 systemd-logind[1444]: Removed session 16.
Mar 14 00:15:38.311783 systemd[1]: Started sshd@16-172.233.218.137:22-4.153.228.146:41268.service - OpenSSH per-connection server daemon (4.153.228.146:41268).
Mar 14 00:15:38.497985 sshd[6327]: Accepted publickey for core from 4.153.228.146 port 41268 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:38.498654 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:38.503377 systemd-logind[1444]: New session 17 of user core.
Mar 14 00:15:38.510720 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:15:38.835664 sshd[6327]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:38.840656 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:15:38.842289 systemd[1]: sshd@16-172.233.218.137:22-4.153.228.146:41268.service: Deactivated successfully.
Mar 14 00:15:38.845984 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:15:38.847197 systemd-logind[1444]: Removed session 17.
Mar 14 00:15:38.868844 systemd[1]: Started sshd@17-172.233.218.137:22-4.153.228.146:50730.service - OpenSSH per-connection server daemon (4.153.228.146:50730).
Mar 14 00:15:39.013912 sshd[6341]: Accepted publickey for core from 4.153.228.146 port 50730 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:39.016185 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:39.021620 systemd-logind[1444]: New session 18 of user core.
Mar 14 00:15:39.027697 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:15:39.222633 sshd[6341]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:39.229203 systemd[1]: sshd@17-172.233.218.137:22-4.153.228.146:50730.service: Deactivated successfully.
Mar 14 00:15:39.232255 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:15:39.234279 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:15:39.235948 systemd-logind[1444]: Removed session 18.
Mar 14 00:15:44.265759 systemd[1]: Started sshd@18-172.233.218.137:22-4.153.228.146:50744.service - OpenSSH per-connection server daemon (4.153.228.146:50744).
Mar 14 00:15:44.422994 sshd[6376]: Accepted publickey for core from 4.153.228.146 port 50744 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:44.423845 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:44.429486 systemd-logind[1444]: New session 19 of user core.
Mar 14 00:15:44.433698 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:15:44.619822 sshd[6376]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:44.626124 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:15:44.627288 systemd[1]: sshd@18-172.233.218.137:22-4.153.228.146:50744.service: Deactivated successfully.
Mar 14 00:15:44.629812 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:15:44.632834 systemd-logind[1444]: Removed session 19.
Mar 14 00:15:49.660109 systemd[1]: Started sshd@19-172.233.218.137:22-4.153.228.146:35186.service - OpenSSH per-connection server daemon (4.153.228.146:35186).
Mar 14 00:15:49.851587 sshd[6389]: Accepted publickey for core from 4.153.228.146 port 35186 ssh2: RSA SHA256:jjworuAdCNaKOK8GYySNem9C2IpwbYUuS++C3Oprvm4
Mar 14 00:15:49.853214 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:49.858395 systemd-logind[1444]: New session 20 of user core.
Mar 14 00:15:49.860672 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:15:50.072853 sshd[6389]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:50.080046 systemd[1]: sshd@19-172.233.218.137:22-4.153.228.146:35186.service: Deactivated successfully.
Mar 14 00:15:50.080437 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:15:50.084242 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:15:50.085454 systemd-logind[1444]: Removed session 20.