Mar 7 01:06:14.954054 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:06:14.954075 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:06:14.954084 kernel: BIOS-provided physical RAM map:
Mar 7 01:06:14.954090 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 7 01:06:14.954095 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 7 01:06:14.954103 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:06:14.954110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 7 01:06:14.954116 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 7 01:06:14.954122 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:06:14.954127 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:06:14.954133 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:06:14.954139 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:06:14.954145 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 7 01:06:14.954153 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:06:14.954160 kernel: NX (Execute Disable) protection: active
Mar 7 01:06:14.954166 kernel: APIC: Static calls initialized
Mar 7 01:06:14.954172 kernel: SMBIOS 2.8 present.
Mar 7 01:06:14.954179 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 7 01:06:14.954185 kernel: Hypervisor detected: KVM
Mar 7 01:06:14.954193 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:06:14.954199 kernel: kvm-clock: using sched offset of 5827120385 cycles
Mar 7 01:06:14.954206 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:06:14.954212 kernel: tsc: Detected 1999.996 MHz processor
Mar 7 01:06:14.954219 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:06:14.954226 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:06:14.954568 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 7 01:06:14.954575 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:06:14.954582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:06:14.954592 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 7 01:06:14.954598 kernel: Using GB pages for direct mapping
Mar 7 01:06:14.954605 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:06:14.954611 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 7 01:06:14.954617 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954624 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954630 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954637 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 7 01:06:14.954643 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954652 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954658 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954665 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:06:14.954675 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 7 01:06:14.954682 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 7 01:06:14.954689 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 7 01:06:14.954698 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 7 01:06:14.954705 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 7 01:06:14.954712 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 7 01:06:14.954719 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 7 01:06:14.954725 kernel: No NUMA configuration found
Mar 7 01:06:14.954732 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 7 01:06:14.954739 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Mar 7 01:06:14.954746 kernel: Zone ranges:
Mar 7 01:06:14.954755 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:06:14.954762 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:06:14.954769 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:06:14.954775 kernel: Movable zone start for each node
Mar 7 01:06:14.954782 kernel: Early memory node ranges
Mar 7 01:06:14.954789 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:06:14.954795 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 7 01:06:14.954802 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:06:14.954809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 7 01:06:14.954815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:06:14.954825 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:06:14.954832 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 7 01:06:14.954838 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:06:14.954845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:06:14.954852 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:06:14.954859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:06:14.954865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:06:14.954872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:06:14.954879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:06:14.954888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:06:14.954895 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:06:14.954902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:06:14.954908 kernel: TSC deadline timer available
Mar 7 01:06:14.954915 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:06:14.954922 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:06:14.954928 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:06:14.954935 kernel: kvm-guest: setup PV sched yield
Mar 7 01:06:14.954942 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:06:14.954951 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:06:14.954958 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:06:14.954965 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:06:14.954971 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:06:14.954978 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:06:14.954985 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:06:14.954991 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:06:14.954998 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:06:14.955006 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:06:14.955015 kernel: random: crng init done
Mar 7 01:06:14.955022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:06:14.955029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:06:14.955035 kernel: Fallback order for Node 0: 0
Mar 7 01:06:14.955042 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 7 01:06:14.955049 kernel: Policy zone: Normal
Mar 7 01:06:14.955055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:06:14.955062 kernel: software IO TLB: area num 2.
Mar 7 01:06:14.955072 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Mar 7 01:06:14.955078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:06:14.955085 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:06:14.955092 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:06:14.955098 kernel: Dynamic Preempt: voluntary
Mar 7 01:06:14.955105 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:06:14.955112 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:06:14.955120 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:06:14.955127 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:06:14.955136 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:06:14.955143 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:06:14.955149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:06:14.955156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:06:14.955163 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:06:14.955170 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:06:14.955176 kernel: Console: colour VGA+ 80x25
Mar 7 01:06:14.955183 kernel: printk: console [tty0] enabled
Mar 7 01:06:14.955189 kernel: printk: console [ttyS0] enabled
Mar 7 01:06:14.955199 kernel: ACPI: Core revision 20230628
Mar 7 01:06:14.955205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:06:14.955212 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:06:14.955219 kernel: x2apic enabled
Mar 7 01:06:14.955669 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:06:14.955681 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:06:14.955688 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:06:14.955695 kernel: kvm-guest: setup PV IPIs
Mar 7 01:06:14.955702 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:06:14.955709 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:06:14.955715 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
Mar 7 01:06:14.955723 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:06:14.955733 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:06:14.955739 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:06:14.955746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:06:14.955753 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:06:14.955760 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:06:14.955769 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 7 01:06:14.955776 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:06:14.955783 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:06:14.955790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:06:14.955797 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:06:14.955804 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:06:14.955811 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:06:14.955818 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:06:14.955827 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:06:14.955834 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:06:14.955841 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:06:14.955847 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:06:14.955854 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:06:14.955861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:06:14.955868 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 7 01:06:14.955874 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 7 01:06:14.955881 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:06:14.955891 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:06:14.955897 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:06:14.955904 kernel: landlock: Up and running.
Mar 7 01:06:14.955911 kernel: SELinux: Initializing.
Mar 7 01:06:14.955917 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:06:14.955924 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:06:14.955931 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:06:14.955938 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:06:14.955945 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:06:14.955954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:06:14.955961 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:06:14.955968 kernel: ... version: 0
Mar 7 01:06:14.955974 kernel: ... bit width: 48
Mar 7 01:06:14.955981 kernel: ... generic registers: 6
Mar 7 01:06:14.955988 kernel: ... value mask: 0000ffffffffffff
Mar 7 01:06:14.955994 kernel: ... max period: 00007fffffffffff
Mar 7 01:06:14.956001 kernel: ... fixed-purpose events: 0
Mar 7 01:06:14.956008 kernel: ... event mask: 000000000000003f
Mar 7 01:06:14.956017 kernel: signal: max sigframe size: 3376
Mar 7 01:06:14.956024 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:06:14.956031 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:06:14.956038 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:06:14.956044 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:06:14.956051 kernel: .... node #0, CPUs: #1
Mar 7 01:06:14.956058 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:06:14.956064 kernel: smpboot: Max logical packages: 1
Mar 7 01:06:14.956071 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Mar 7 01:06:14.956081 kernel: devtmpfs: initialized
Mar 7 01:06:14.956087 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:06:14.956094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:06:14.956101 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:06:14.956108 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:06:14.956114 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:06:14.956121 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:06:14.956128 kernel: audit: type=2000 audit(1772845574.618:1): state=initialized audit_enabled=0 res=1
Mar 7 01:06:14.956135 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:06:14.956144 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:06:14.956151 kernel: cpuidle: using governor menu
Mar 7 01:06:14.956157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:06:14.956164 kernel: dca service started, version 1.12.1
Mar 7 01:06:14.956171 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:06:14.956178 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:06:14.956184 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:06:14.956191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:06:14.956198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:06:14.956207 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:06:14.956214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:06:14.956221 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:06:14.956255 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:06:14.956262 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:06:14.956269 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:06:14.956276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:06:14.956283 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:06:14.956289 kernel: ACPI: Interpreter enabled
Mar 7 01:06:14.956299 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:06:14.956306 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:06:14.956313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:06:14.956320 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:06:14.956327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:06:14.956333 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:06:14.956514 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:06:14.956650 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:06:14.956784 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:06:14.956794 kernel: PCI host bridge to bus 0000:00
Mar 7 01:06:14.956925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:06:14.957041 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:06:14.957155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:06:14.957306 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:06:14.957423 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:06:14.957543 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:06:14.957658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:06:14.957799 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:06:14.957934 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:06:14.958060 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:06:14.958183 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:06:14.958331 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:06:14.958456 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:06:14.958589 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:06:14.958714 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:06:14.958837 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:06:14.958960 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:06:14.959092 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:06:14.959224 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:06:14.959382 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:06:14.959522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:06:14.959648 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:06:14.959780 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:06:14.959904 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:06:14.960035 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:06:14.960167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:06:14.960326 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:06:14.960461 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:06:14.960584 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:06:14.960594 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:06:14.960602 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:06:14.960609 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:06:14.960620 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:06:14.960628 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:06:14.960635 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:06:14.960642 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:06:14.960649 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:06:14.960657 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:06:14.960664 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:06:14.960671 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:06:14.960678 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:06:14.960688 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:06:14.960695 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:06:14.960702 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:06:14.960709 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:06:14.960716 kernel: iommu: Default domain type: Translated
Mar 7 01:06:14.960724 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:06:14.960731 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:06:14.960738 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:06:14.960745 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:06:14.960754 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:06:14.960876 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:06:14.960999 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:06:14.961121 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:06:14.961130 kernel: vgaarb: loaded
Mar 7 01:06:14.961138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:06:14.961145 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:06:14.961152 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:06:14.961163 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:06:14.961170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:06:14.961177 kernel: pnp: PnP ACPI init
Mar 7 01:06:14.961333 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:06:14.961345 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:06:14.961353 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:06:14.961360 kernel: NET: Registered PF_INET protocol family
Mar 7 01:06:14.961367 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:06:14.961378 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:06:14.961385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:06:14.961393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:06:14.961400 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:06:14.961407 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:06:14.961414 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:06:14.961421 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:06:14.961428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:06:14.961436 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:06:14.961554 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:06:14.961669 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:06:14.961782 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:06:14.961895 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:06:14.962008 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:06:14.962121 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:06:14.962130 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:06:14.962137 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:06:14.962148 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:06:14.962155 kernel: Initialise system trusted keyrings
Mar 7 01:06:14.962161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:06:14.962168 kernel: Key type asymmetric registered
Mar 7 01:06:14.962175 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:06:14.962182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:06:14.962189 kernel: io scheduler mq-deadline registered
Mar 7 01:06:14.962196 kernel: io scheduler kyber registered
Mar 7 01:06:14.962203 kernel: io scheduler bfq registered
Mar 7 01:06:14.962210 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:06:14.962220 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:06:14.962227 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:06:14.962247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:06:14.962254 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:06:14.962261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:06:14.962268 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:06:14.962275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:06:14.962282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:06:14.962413 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:06:14.962537 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:06:14.962654 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:06:14 UTC (1772845574)
Mar 7 01:06:14.962771 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:06:14.962781 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:06:14.962788 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:06:14.962795 kernel: Segment Routing with IPv6
Mar 7 01:06:14.962802 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:06:14.962813 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:06:14.962820 kernel: Key type dns_resolver registered
Mar 7 01:06:14.962827 kernel: IPI shorthand broadcast: enabled
Mar 7 01:06:14.962834 kernel: sched_clock: Marking stable (840002805, 311248654)->(1281883044, -130631585)
Mar 7 01:06:14.962841 kernel: registered taskstats version 1
Mar 7 01:06:14.962848 kernel: Loading compiled-in X.509 certificates
Mar 7 01:06:14.962856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:06:14.962863 kernel: Key type .fscrypt registered
Mar 7 01:06:14.962870 kernel: Key type fscrypt-provisioning registered
Mar 7 01:06:14.962879 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:06:14.962886 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:06:14.962894 kernel: ima: No architecture policies found
Mar 7 01:06:14.962901 kernel: clk: Disabling unused clocks
Mar 7 01:06:14.962908 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:06:14.962915 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:06:14.962922 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:06:14.962930 kernel: Run /init as init process
Mar 7 01:06:14.962937 kernel: with arguments:
Mar 7 01:06:14.962946 kernel: /init
Mar 7 01:06:14.962953 kernel: with environment:
Mar 7 01:06:14.962960 kernel: HOME=/
Mar 7 01:06:14.962967 kernel: TERM=linux
Mar 7 01:06:14.962976 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:06:14.962985 systemd[1]: Detected virtualization kvm.
Mar 7 01:06:14.962993 systemd[1]: Detected architecture x86-64.
Mar 7 01:06:14.963000 systemd[1]: Running in initrd.
Mar 7 01:06:14.963010 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:06:14.963017 systemd[1]: Hostname set to .
Mar 7 01:06:14.963025 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:06:14.963032 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:06:14.963040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:06:14.963062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:06:14.963075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:06:14.963083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:06:14.963091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:06:14.963099 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:06:14.963108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:06:14.963116 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:06:14.963126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:06:14.963134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:06:14.963142 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:06:14.963150 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:06:14.963157 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:06:14.963165 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:06:14.963173 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:06:14.963181 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:06:14.963189 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:06:14.963199 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:06:14.963207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:06:14.963215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:06:14.963223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:06:14.963243 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:06:14.963251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:06:14.963259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:06:14.963266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:06:14.963274 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:06:14.963285 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:06:14.963292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:06:14.963319 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:06:14.963336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:06:14.963347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:06:14.963358 systemd-journald[178]: Journal started
Mar 7 01:06:14.963374 systemd-journald[178]: Runtime Journal (/run/log/journal/8be35927e02c4d9da044b676e4cfb446) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:06:14.965542 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:06:14.974386 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:06:14.974020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:06:14.976534 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:06:14.983419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:06:14.996260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:06:14.996288 kernel: Bridge firewalling registered
Mar 7 01:06:14.996031 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:06:15.002406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:06:15.079812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:06:15.084425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:06:15.093390 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:06:15.096143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:06:15.098657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:06:15.125152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:06:15.134023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:06:15.136047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:06:15.139538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:06:15.142398 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:06:15.150352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:06:15.152490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:06:15.163641 dracut-cmdline[212]: dracut-dracut-053 Mar 7 01:06:15.167162 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:06:15.182723 systemd-resolved[213]: Positive Trust Anchors: Mar 7 01:06:15.183852 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:06:15.184482 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:06:15.190588 systemd-resolved[213]: Defaulting to hostname 'linux'. Mar 7 01:06:15.192530 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:06:15.195366 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:06:15.247276 kernel: SCSI subsystem initialized Mar 7 01:06:15.257252 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:06:15.268259 kernel: iscsi: registered transport (tcp) Mar 7 01:06:15.289025 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:06:15.289074 kernel: QLogic iSCSI HBA Driver Mar 7 01:06:15.329630 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 7 01:06:15.338392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:06:15.365408 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:06:15.365450 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:06:15.368606 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:06:15.408259 kernel: raid6: avx2x4 gen() 36229 MB/s Mar 7 01:06:15.426252 kernel: raid6: avx2x2 gen() 31381 MB/s Mar 7 01:06:15.444370 kernel: raid6: avx2x1 gen() 27495 MB/s Mar 7 01:06:15.444395 kernel: raid6: using algorithm avx2x4 gen() 36229 MB/s Mar 7 01:06:15.467599 kernel: raid6: .... xor() 5106 MB/s, rmw enabled Mar 7 01:06:15.467639 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:06:15.489256 kernel: xor: automatically using best checksumming function avx Mar 7 01:06:15.619275 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:06:15.630824 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:06:15.637363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:06:15.650708 systemd-udevd[397]: Using default interface naming scheme 'v255'. Mar 7 01:06:15.655218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:06:15.666368 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:06:15.679457 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Mar 7 01:06:15.708906 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:06:15.717366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:06:15.787029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:06:15.796415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:06:15.811147 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:06:15.813744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:06:15.814968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:06:15.817442 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:06:15.825389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:06:15.840456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:06:15.865269 kernel: scsi host0: Virtio SCSI HBA Mar 7 01:06:15.875248 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:06:15.881263 kernel: libata version 3.00 loaded. Mar 7 01:06:15.887248 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 01:06:16.092338 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:06:16.094810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:06:16.136305 kernel: AES CTR mode by8 optimization enabled Mar 7 01:06:16.129865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:16.135210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:06:16.136017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:06:16.136743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:16.149271 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:06:16.149476 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:06:16.138504 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:06:16.146482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:06:16.162400 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:06:16.162602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:06:16.182272 kernel: scsi host1: ahci Mar 7 01:06:16.185279 kernel: scsi host2: ahci Mar 7 01:06:16.186296 kernel: scsi host3: ahci Mar 7 01:06:16.187247 kernel: scsi host4: ahci Mar 7 01:06:16.187439 kernel: scsi host5: ahci Mar 7 01:06:16.188370 kernel: scsi host6: ahci Mar 7 01:06:16.188563 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Mar 7 01:06:16.188590 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Mar 7 01:06:16.188608 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Mar 7 01:06:16.188623 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Mar 7 01:06:16.188638 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Mar 7 01:06:16.188650 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Mar 7 01:06:16.191272 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 7 01:06:16.191473 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Mar 7 01:06:16.191682 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:06:16.191851 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 7 01:06:16.192011 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 01:06:16.194836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:06:16.194863 kernel: GPT:9289727 != 167739391 Mar 7 01:06:16.194874 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:06:16.194884 kernel: GPT:9289727 != 167739391 Mar 7 01:06:16.194894 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 7 01:06:16.194903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:16.194919 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:06:16.319644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:16.337457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:06:16.361718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:16.495277 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.503254 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.503294 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.507262 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.508256 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.510261 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.553266 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445) Mar 7 01:06:16.560266 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 7 01:06:16.562380 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (444) Mar 7 01:06:16.568630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 7 01:06:16.578522 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 01:06:16.583883 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 01:06:16.585873 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 01:06:16.592375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:06:16.597521 disk-uuid[567]: Primary Header is updated. 
Mar 7 01:06:16.597521 disk-uuid[567]: Secondary Entries is updated. Mar 7 01:06:16.597521 disk-uuid[567]: Secondary Header is updated. Mar 7 01:06:16.605270 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:16.611254 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:17.615563 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:17.615664 disk-uuid[568]: The operation has completed successfully. Mar 7 01:06:17.673977 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:06:17.674117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:06:17.685398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:06:17.690802 sh[582]: Success Mar 7 01:06:17.707041 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:06:17.754843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:06:17.765339 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:06:17.768384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:06:17.787273 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:06:17.787314 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:17.792907 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:06:17.792933 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:06:17.797527 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:06:17.805246 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 01:06:17.807386 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:06:17.808774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 7 01:06:17.815354 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:06:17.819387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:06:17.833460 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:17.833501 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:17.837319 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:06:17.844728 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:06:17.844768 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:06:17.855432 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:06:17.859502 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:17.866402 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:06:17.873471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:06:17.948508 ignition[682]: Ignition 2.19.0 Mar 7 01:06:17.948522 ignition[682]: Stage: fetch-offline Mar 7 01:06:17.952557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:06:17.948575 ignition[682]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:17.948586 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:17.948676 ignition[682]: parsed url from cmdline: "" Mar 7 01:06:17.948682 ignition[682]: no config URL provided Mar 7 01:06:17.948691 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:06:17.958424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 01:06:17.948705 ignition[682]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:06:17.948713 ignition[682]: failed to fetch config: resource requires networking Mar 7 01:06:17.950210 ignition[682]: Ignition finished successfully Mar 7 01:06:17.967381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:06:17.989490 systemd-networkd[771]: lo: Link UP Mar 7 01:06:17.989502 systemd-networkd[771]: lo: Gained carrier Mar 7 01:06:17.991201 systemd-networkd[771]: Enumeration completed Mar 7 01:06:17.991589 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:06:17.992081 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:06:17.992086 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:06:17.994606 systemd-networkd[771]: eth0: Link UP Mar 7 01:06:17.994611 systemd-networkd[771]: eth0: Gained carrier Mar 7 01:06:17.994618 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:06:17.995127 systemd[1]: Reached target network.target - Network. Mar 7 01:06:18.001638 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:06:18.016988 ignition[773]: Ignition 2.19.0 Mar 7 01:06:18.017600 ignition[773]: Stage: fetch Mar 7 01:06:18.017766 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:18.017779 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:18.017890 ignition[773]: parsed url from cmdline: "" Mar 7 01:06:18.017894 ignition[773]: no config URL provided Mar 7 01:06:18.017900 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:06:18.017912 ignition[773]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:06:18.017929 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 Mar 7 01:06:18.018107 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.218869 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 Mar 7 01:06:18.219122 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.619497 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 Mar 7 01:06:18.619685 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.701320 systemd-networkd[771]: eth0: DHCPv4 address 172.239.198.121/24, gateway 172.239.198.1 acquired from 23.213.15.213 Mar 7 01:06:19.420510 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4 Mar 7 01:06:19.590417 systemd-networkd[771]: eth0: Gained IPv6LL Mar 7 01:06:19.666855 ignition[773]: PUT result: OK Mar 7 01:06:19.666921 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 Mar 7 01:06:19.778781 ignition[773]: GET result: OK Mar 7 01:06:19.778892 ignition[773]: parsing config with SHA512: e25e15d792283a9c56278a662e8cc0a4682f67670a0c107fa124921d663bdb0d57992de37c9f4ebd7ee9c7b2c9d4304a4473365e464b5f2fe108819a1bd0faad Mar 7 01:06:19.782537 unknown[773]: fetched base 
config from "system" Mar 7 01:06:19.782551 unknown[773]: fetched base config from "system" Mar 7 01:06:19.783249 ignition[773]: fetch: fetch complete Mar 7 01:06:19.782558 unknown[773]: fetched user config from "akamai" Mar 7 01:06:19.783256 ignition[773]: fetch: fetch passed Mar 7 01:06:19.783307 ignition[773]: Ignition finished successfully Mar 7 01:06:19.787046 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:06:19.805418 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:06:19.821378 ignition[780]: Ignition 2.19.0 Mar 7 01:06:19.821399 ignition[780]: Stage: kargs Mar 7 01:06:19.821614 ignition[780]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:19.821633 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:19.830557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:06:19.822545 ignition[780]: kargs: kargs passed Mar 7 01:06:19.822593 ignition[780]: Ignition finished successfully Mar 7 01:06:19.843395 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:06:19.855849 ignition[786]: Ignition 2.19.0 Mar 7 01:06:19.855864 ignition[786]: Stage: disks Mar 7 01:06:19.866467 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:06:19.856079 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:19.882018 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:06:19.856091 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:19.883954 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:06:19.857368 ignition[786]: disks: disks passed Mar 7 01:06:19.884920 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:06:19.857412 ignition[786]: Ignition finished successfully Mar 7 01:06:19.885852 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 7 01:06:19.887697 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:06:19.895434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:06:19.916579 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:06:19.921593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:06:19.928491 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:06:20.018269 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:06:20.019148 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:06:20.020826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:06:20.026301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:06:20.029342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:06:20.031157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:06:20.032577 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:06:20.032601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:06:20.041829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803) Mar 7 01:06:20.041855 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:20.046257 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:20.048364 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:06:20.048997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:06:20.052710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:06:20.058255 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:06:20.058296 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:06:20.071334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:06:20.116257 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:06:20.122407 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:06:20.127006 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:06:20.131677 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:06:20.224298 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:06:20.230323 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:06:20.233367 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:06:20.242269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:06:20.246170 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:20.268578 ignition[916]: INFO : Ignition 2.19.0 Mar 7 01:06:20.270123 ignition[916]: INFO : Stage: mount Mar 7 01:06:20.270123 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:20.270123 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:20.269859 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:06:20.276999 ignition[916]: INFO : mount: mount passed Mar 7 01:06:20.276999 ignition[916]: INFO : Ignition finished successfully Mar 7 01:06:20.274188 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:06:20.282335 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:06:21.024411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 7 01:06:21.040393 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (927) Mar 7 01:06:21.040444 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:21.043780 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:21.046567 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:06:21.053293 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:06:21.053323 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:06:21.058491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:06:21.087215 ignition[943]: INFO : Ignition 2.19.0 Mar 7 01:06:21.087215 ignition[943]: INFO : Stage: files Mar 7 01:06:21.089382 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:21.089382 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:21.089382 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:06:21.092763 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:06:21.092763 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:06:21.095009 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:06:21.095009 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:06:21.097165 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:06:21.097165 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:06:21.097165 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:06:21.095140 unknown[943]: wrote 
ssh authorized keys file for user: core Mar 7 01:06:21.412662 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:06:21.539034 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:06:21.539034 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:06:22.078758 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:06:22.516520 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:06:22.516520 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:06:22.521653 ignition[943]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:06:22.521653 ignition[943]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:06:22.521653 ignition[943]: INFO : files: files passed Mar 7 01:06:22.521653 ignition[943]: INFO : Ignition finished successfully Mar 7 01:06:22.521854 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:06:22.553441 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:06:22.561381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:06:22.568797 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:06:22.568940 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:06:22.576515 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:06:22.578209 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:06:22.580038 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:06:22.582706 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:06:22.584900 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 7 01:06:22.590366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:06:22.625522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:06:22.625683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:06:22.627644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:06:22.628926 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:06:22.630604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:06:22.642377 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:06:22.656454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:06:22.664411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:06:22.673975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:06:22.675394 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:06:22.676728 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:06:22.678407 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:06:22.678524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:06:22.680528 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:06:22.681674 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:06:22.683442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:06:22.685096 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:06:22.686989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:06:22.688990 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 7 01:06:22.690923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:06:22.692780 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:06:22.694626 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:06:22.696476 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:06:22.698259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:06:22.698373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:06:22.700184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:06:22.701444 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:06:22.702913 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:06:22.703028 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:06:22.704670 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:06:22.704772 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:06:22.707209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:06:22.707361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:06:22.708649 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:06:22.708767 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:06:22.718382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:06:22.721386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:06:22.722784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:06:22.722900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:06:22.727504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 7 01:06:22.727717 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:06:22.737065 ignition[997]: INFO : Ignition 2.19.0 Mar 7 01:06:22.738311 ignition[997]: INFO : Stage: umount Mar 7 01:06:22.739681 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:22.739681 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:22.739628 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:06:22.739741 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:06:22.746973 ignition[997]: INFO : umount: umount passed Mar 7 01:06:22.746973 ignition[997]: INFO : Ignition finished successfully Mar 7 01:06:22.746554 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:06:22.747281 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:06:22.753593 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:06:22.753661 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:06:22.756432 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:06:22.756497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:06:22.760210 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:06:22.760311 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:06:22.761217 systemd[1]: Stopped target network.target - Network. Mar 7 01:06:22.762004 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:06:22.762070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:06:22.765427 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:06:22.766758 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:06:22.784281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 7 01:06:22.793852 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:06:22.795735 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:06:22.797360 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:06:22.797435 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:06:22.799149 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:06:22.799210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:06:22.800962 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:06:22.801042 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:06:22.802544 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:06:22.802617 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:06:22.804473 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:06:22.806188 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:06:22.809599 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:06:22.810566 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:06:22.810747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:06:22.811285 systemd-networkd[771]: eth0: DHCPv6 lease lost Mar 7 01:06:22.814761 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:06:22.814928 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:06:22.818378 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:06:22.818523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:06:22.824115 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:06:22.824179 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:06:22.825556 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 7 01:06:22.825615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:06:22.833346 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:06:22.835131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:06:22.835202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:06:22.838682 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:06:22.838735 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:06:22.839612 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:06:22.839663 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:06:22.841586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:06:22.841821 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:06:22.843250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:06:22.858151 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:06:22.858863 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:06:22.861896 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:06:22.862078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:06:22.863896 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:06:22.863947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:06:22.865224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:06:22.865338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:06:22.866929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:06:22.866984 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 7 01:06:22.869252 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:06:22.869305 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:06:22.870868 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:06:22.870918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:22.878402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:06:22.880731 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:06:22.880788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:06:22.883441 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:06:22.883496 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:06:22.884818 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:06:22.884871 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:06:22.886320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:06:22.886385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:22.888311 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:06:22.888429 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:06:22.890507 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:06:22.901759 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:06:22.909680 systemd[1]: Switching root. 
Mar 7 01:06:22.940271 systemd-journald[178]: Journal stopped Mar 7 01:06:14.954054 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:06:14.954075 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:06:14.954084 kernel: BIOS-provided physical RAM map: Mar 7 01:06:14.954090 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Mar 7 01:06:14.954095 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Mar 7 01:06:14.954103 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 7 01:06:14.954110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Mar 7 01:06:14.954116 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Mar 7 01:06:14.954122 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 01:06:14.954127 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 7 01:06:14.954133 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 7 01:06:14.954139 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 7 01:06:14.954145 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Mar 7 01:06:14.954153 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 7 01:06:14.954160 kernel: NX (Execute Disable) protection: active Mar 7 01:06:14.954166 kernel: APIC: Static calls initialized Mar 7 01:06:14.954172 kernel: SMBIOS 2.8 present. 
Mar 7 01:06:14.954179 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Mar 7 01:06:14.954185 kernel: Hypervisor detected: KVM Mar 7 01:06:14.954193 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:06:14.954199 kernel: kvm-clock: using sched offset of 5827120385 cycles Mar 7 01:06:14.954206 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:06:14.954212 kernel: tsc: Detected 1999.996 MHz processor Mar 7 01:06:14.954219 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:06:14.954226 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:06:14.954568 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Mar 7 01:06:14.954575 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 7 01:06:14.954582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:06:14.954592 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Mar 7 01:06:14.954598 kernel: Using GB pages for direct mapping Mar 7 01:06:14.954605 kernel: ACPI: Early table checksum verification disabled Mar 7 01:06:14.954611 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Mar 7 01:06:14.954617 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954624 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954630 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954637 kernel: ACPI: FACS 0x000000007FFE0000 000040 Mar 7 01:06:14.954643 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954652 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954658 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954665 
kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:06:14.954675 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Mar 7 01:06:14.954682 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Mar 7 01:06:14.954689 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 7 01:06:14.954698 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Mar 7 01:06:14.954705 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Mar 7 01:06:14.954712 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Mar 7 01:06:14.954719 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Mar 7 01:06:14.954725 kernel: No NUMA configuration found Mar 7 01:06:14.954732 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Mar 7 01:06:14.954739 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Mar 7 01:06:14.954746 kernel: Zone ranges: Mar 7 01:06:14.954755 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:06:14.954762 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:06:14.954769 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:06:14.954775 kernel: Movable zone start for each node Mar 7 01:06:14.954782 kernel: Early memory node ranges Mar 7 01:06:14.954789 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 01:06:14.954795 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Mar 7 01:06:14.954802 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:06:14.954809 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Mar 7 01:06:14.954815 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:06:14.954825 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 01:06:14.954832 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 7 01:06:14.954838 
kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:06:14.954845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:06:14.954852 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:06:14.954859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:06:14.954865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:06:14.954872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:06:14.954879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:06:14.954888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:06:14.954895 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:06:14.954902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:06:14.954908 kernel: TSC deadline timer available Mar 7 01:06:14.954915 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:06:14.954922 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:06:14.954928 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:06:14.954935 kernel: kvm-guest: setup PV sched yield Mar 7 01:06:14.954942 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 01:06:14.954951 kernel: Booting paravirtualized kernel on KVM Mar 7 01:06:14.954958 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:06:14.954965 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:06:14.954971 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:06:14.954978 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:06:14.954985 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:06:14.954991 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:06:14.954998 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:06:14.955006 kernel: Kernel command 
line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:06:14.955015 kernel: random: crng init done Mar 7 01:06:14.955022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:06:14.955029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:06:14.955035 kernel: Fallback order for Node 0: 0 Mar 7 01:06:14.955042 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Mar 7 01:06:14.955049 kernel: Policy zone: Normal Mar 7 01:06:14.955055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:06:14.955062 kernel: software IO TLB: area num 2. Mar 7 01:06:14.955072 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved) Mar 7 01:06:14.955078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:06:14.955085 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:06:14.955092 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:06:14.955098 kernel: Dynamic Preempt: voluntary Mar 7 01:06:14.955105 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:06:14.955112 kernel: rcu: RCU event tracing is enabled. Mar 7 01:06:14.955120 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:06:14.955127 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:06:14.955136 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:06:14.955143 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:06:14.955149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 01:06:14.955156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:06:14.955163 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:06:14.955170 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:06:14.955176 kernel: Console: colour VGA+ 80x25 Mar 7 01:06:14.955183 kernel: printk: console [tty0] enabled Mar 7 01:06:14.955189 kernel: printk: console [ttyS0] enabled Mar 7 01:06:14.955199 kernel: ACPI: Core revision 20230628 Mar 7 01:06:14.955205 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:06:14.955212 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:06:14.955219 kernel: x2apic enabled Mar 7 01:06:14.955669 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:06:14.955681 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:06:14.955688 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:06:14.955695 kernel: kvm-guest: setup PV IPIs Mar 7 01:06:14.955702 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:06:14.955709 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:06:14.955715 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999996) Mar 7 01:06:14.955723 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:06:14.955733 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:06:14.955739 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:06:14.955746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:06:14.955753 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:06:14.955760 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:06:14.955769 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 7 01:06:14.955776 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 7 01:06:14.955783 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 7 01:06:14.955790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:06:14.955797 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:06:14.955804 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:06:14.955811 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:06:14.955818 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:06:14.955827 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:06:14.955834 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:06:14.955841 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:06:14.955847 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:06:14.955854 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 7 01:06:14.955861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:06:14.955868 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Mar 7 01:06:14.955874 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Mar 7 01:06:14.955881 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:06:14.955891 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:06:14.955897 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:06:14.955904 kernel: landlock: Up and running. Mar 7 01:06:14.955911 kernel: SELinux: Initializing. Mar 7 01:06:14.955917 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:06:14.955924 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:06:14.955931 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:06:14.955938 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:06:14.955945 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 7 01:06:14.955954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:06:14.955961 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 7 01:06:14.955968 kernel: ... version: 0 Mar 7 01:06:14.955974 kernel: ... bit width: 48 Mar 7 01:06:14.955981 kernel: ... generic registers: 6 Mar 7 01:06:14.955988 kernel: ... value mask: 0000ffffffffffff Mar 7 01:06:14.955994 kernel: ... max period: 00007fffffffffff Mar 7 01:06:14.956001 kernel: ... fixed-purpose events: 0 Mar 7 01:06:14.956008 kernel: ... event mask: 000000000000003f Mar 7 01:06:14.956017 kernel: signal: max sigframe size: 3376 Mar 7 01:06:14.956024 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:06:14.956031 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:06:14.956038 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:06:14.956044 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:06:14.956051 kernel: .... node #0, CPUs: #1 Mar 7 01:06:14.956058 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:06:14.956064 kernel: smpboot: Max logical packages: 1 Mar 7 01:06:14.956071 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Mar 7 01:06:14.956081 kernel: devtmpfs: initialized Mar 7 01:06:14.956087 kernel: x86/mm: Memory block size: 128MB Mar 7 01:06:14.956094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:06:14.956101 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:06:14.956108 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:06:14.956114 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:06:14.956121 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:06:14.956128 kernel: audit: type=2000 audit(1772845574.618:1): state=initialized audit_enabled=0 res=1 Mar 7 01:06:14.956135 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:06:14.956144 kernel: 
thermal_sys: Registered thermal governor 'user_space' Mar 7 01:06:14.956151 kernel: cpuidle: using governor menu Mar 7 01:06:14.956157 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:06:14.956164 kernel: dca service started, version 1.12.1 Mar 7 01:06:14.956171 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:06:14.956178 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:06:14.956184 kernel: PCI: Using configuration type 1 for base access Mar 7 01:06:14.956191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:06:14.956198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:06:14.956207 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:06:14.956214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:06:14.956221 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:06:14.956255 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:06:14.956262 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:06:14.956269 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:06:14.956276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:06:14.956283 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:06:14.956289 kernel: ACPI: Interpreter enabled Mar 7 01:06:14.956299 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:06:14.956306 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:06:14.956313 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:06:14.956320 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:06:14.956327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:06:14.956333 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:06:14.956514 kernel: acpi 
PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:06:14.956650 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 01:06:14.956784 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 01:06:14.956794 kernel: PCI host bridge to bus 0000:00 Mar 7 01:06:14.956925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:06:14.957041 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 01:06:14.957155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 01:06:14.957306 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Mar 7 01:06:14.957423 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 01:06:14.957543 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Mar 7 01:06:14.957658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 01:06:14.957799 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 01:06:14.957934 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 01:06:14.958060 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 7 01:06:14.958183 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 7 01:06:14.958331 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 7 01:06:14.958456 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 01:06:14.958589 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Mar 7 01:06:14.958714 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Mar 7 01:06:14.958837 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 7 01:06:14.958960 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 7 01:06:14.959092 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 7 01:06:14.959224 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Mar 7 01:06:14.959382 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 7 01:06:14.959522 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 7 01:06:14.959648 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 7 01:06:14.959780 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 01:06:14.959904 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 01:06:14.960035 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 01:06:14.960167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Mar 7 01:06:14.960326 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Mar 7 01:06:14.960461 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 01:06:14.960584 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 7 01:06:14.960594 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 01:06:14.960602 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 01:06:14.960609 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 01:06:14.960620 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 01:06:14.960628 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 01:06:14.960635 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 01:06:14.960642 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 01:06:14.960649 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 01:06:14.960657 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 7 01:06:14.960664 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 7 01:06:14.960671 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 01:06:14.960678 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 01:06:14.960688 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 
01:06:14.960695 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 7 01:06:14.960702 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 01:06:14.960709 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 01:06:14.960716 kernel: iommu: Default domain type: Translated Mar 7 01:06:14.960724 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:06:14.960731 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:06:14.960738 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 01:06:14.960745 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Mar 7 01:06:14.960754 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Mar 7 01:06:14.960876 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 01:06:14.960999 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 01:06:14.961121 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 01:06:14.961130 kernel: vgaarb: loaded Mar 7 01:06:14.961138 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 01:06:14.961145 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 01:06:14.961152 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 01:06:14.961163 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:06:14.961170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:06:14.961177 kernel: pnp: PnP ACPI init Mar 7 01:06:14.961333 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 01:06:14.961345 kernel: pnp: PnP ACPI: found 5 devices Mar 7 01:06:14.961353 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:06:14.961360 kernel: NET: Registered PF_INET protocol family Mar 7 01:06:14.961367 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:06:14.961378 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) 
Mar 7 01:06:14.961385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:06:14.961393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:06:14.961400 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:06:14.961407 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:06:14.961414 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:06:14.961421 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:06:14.961428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:06:14.961436 kernel: NET: Registered PF_XDP protocol family Mar 7 01:06:14.961554 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 01:06:14.961669 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 01:06:14.961782 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 01:06:14.961895 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Mar 7 01:06:14.962008 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 01:06:14.962121 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Mar 7 01:06:14.962130 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:06:14.962137 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 7 01:06:14.962148 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Mar 7 01:06:14.962155 kernel: Initialise system trusted keyrings Mar 7 01:06:14.962161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:06:14.962168 kernel: Key type asymmetric registered Mar 7 01:06:14.962175 kernel: Asymmetric key parser 'x509' registered Mar 7 01:06:14.962182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:06:14.962189 kernel: io scheduler mq-deadline registered Mar 7 01:06:14.962196 
kernel: io scheduler kyber registered Mar 7 01:06:14.962203 kernel: io scheduler bfq registered Mar 7 01:06:14.962210 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:06:14.962220 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 7 01:06:14.962227 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:06:14.962247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:06:14.962254 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:06:14.962261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:06:14.962268 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:06:14.962275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:06:14.962282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 01:06:14.962413 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 7 01:06:14.962537 kernel: rtc_cmos 00:03: registered as rtc0 Mar 7 01:06:14.962654 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:06:14 UTC (1772845574) Mar 7 01:06:14.962771 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 01:06:14.962781 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 01:06:14.962788 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:06:14.962795 kernel: Segment Routing with IPv6 Mar 7 01:06:14.962802 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:06:14.962813 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:06:14.962820 kernel: Key type dns_resolver registered Mar 7 01:06:14.962827 kernel: IPI shorthand broadcast: enabled Mar 7 01:06:14.962834 kernel: sched_clock: Marking stable (840002805, 311248654)->(1281883044, -130631585) Mar 7 01:06:14.962841 kernel: registered taskstats version 1 Mar 7 01:06:14.962848 kernel: Loading compiled-in X.509 certificates Mar 7 01:06:14.962856 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 
6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:06:14.962863 kernel: Key type .fscrypt registered Mar 7 01:06:14.962870 kernel: Key type fscrypt-provisioning registered Mar 7 01:06:14.962879 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 01:06:14.962886 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:06:14.962894 kernel: ima: No architecture policies found Mar 7 01:06:14.962901 kernel: clk: Disabling unused clocks Mar 7 01:06:14.962908 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:06:14.962915 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:06:14.962922 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:06:14.962930 kernel: Run /init as init process Mar 7 01:06:14.962937 kernel: with arguments: Mar 7 01:06:14.962946 kernel: /init Mar 7 01:06:14.962953 kernel: with environment: Mar 7 01:06:14.962960 kernel: HOME=/ Mar 7 01:06:14.962967 kernel: TERM=linux Mar 7 01:06:14.962976 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:06:14.962985 systemd[1]: Detected virtualization kvm. Mar 7 01:06:14.962993 systemd[1]: Detected architecture x86-64. Mar 7 01:06:14.963000 systemd[1]: Running in initrd. Mar 7 01:06:14.963010 systemd[1]: No hostname configured, using default hostname. Mar 7 01:06:14.963017 systemd[1]: Hostname set to . Mar 7 01:06:14.963025 systemd[1]: Initializing machine ID from random generator. Mar 7 01:06:14.963032 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:06:14.963040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 7 01:06:14.963062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:06:14.963075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:06:14.963083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:06:14.963091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:06:14.963099 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:06:14.963108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:06:14.963116 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:06:14.963126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:06:14.963134 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:06:14.963142 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:06:14.963150 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:06:14.963157 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:06:14.963165 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:06:14.963173 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:06:14.963181 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:06:14.963189 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:06:14.963199 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:06:14.963207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:06:14.963215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 7 01:06:14.963223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:06:14.963243 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:06:14.963251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:06:14.963259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:06:14.963266 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:06:14.963274 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:06:14.963285 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:06:14.963292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:06:14.963319 systemd-journald[178]: Collecting audit messages is disabled. Mar 7 01:06:14.963336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:06:14.963347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:06:14.963358 systemd-journald[178]: Journal started Mar 7 01:06:14.963374 systemd-journald[178]: Runtime Journal (/run/log/journal/8be35927e02c4d9da044b676e4cfb446) is 8.0M, max 78.3M, 70.3M free. Mar 7 01:06:14.965542 systemd-modules-load[179]: Inserted module 'overlay' Mar 7 01:06:14.974386 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:06:14.974020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:06:14.976534 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:06:14.983419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:06:14.996260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 7 01:06:14.996288 kernel: Bridge firewalling registered Mar 7 01:06:14.996031 systemd-modules-load[179]: Inserted module 'br_netfilter' Mar 7 01:06:15.002406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:06:15.079812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:06:15.084425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:15.093390 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:06:15.096143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:06:15.098657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:06:15.125152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:06:15.134023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:06:15.136047 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:06:15.139538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:15.142398 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:06:15.150352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:06:15.152490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 01:06:15.163641 dracut-cmdline[212]: dracut-dracut-053 Mar 7 01:06:15.167162 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:06:15.182723 systemd-resolved[213]: Positive Trust Anchors: Mar 7 01:06:15.183852 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:06:15.184482 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:06:15.190588 systemd-resolved[213]: Defaulting to hostname 'linux'. Mar 7 01:06:15.192530 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:06:15.195366 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:06:15.247276 kernel: SCSI subsystem initialized Mar 7 01:06:15.257252 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:06:15.268259 kernel: iscsi: registered transport (tcp) Mar 7 01:06:15.289025 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:06:15.289074 kernel: QLogic iSCSI HBA Driver Mar 7 01:06:15.329630 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 7 01:06:15.338392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:06:15.365408 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:06:15.365450 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:06:15.368606 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:06:15.408259 kernel: raid6: avx2x4 gen() 36229 MB/s Mar 7 01:06:15.426252 kernel: raid6: avx2x2 gen() 31381 MB/s Mar 7 01:06:15.444370 kernel: raid6: avx2x1 gen() 27495 MB/s Mar 7 01:06:15.444395 kernel: raid6: using algorithm avx2x4 gen() 36229 MB/s Mar 7 01:06:15.467599 kernel: raid6: .... xor() 5106 MB/s, rmw enabled Mar 7 01:06:15.467639 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:06:15.489256 kernel: xor: automatically using best checksumming function avx Mar 7 01:06:15.619275 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:06:15.630824 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:06:15.637363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:06:15.650708 systemd-udevd[397]: Using default interface naming scheme 'v255'. Mar 7 01:06:15.655218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:06:15.666368 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:06:15.679457 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Mar 7 01:06:15.708906 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:06:15.717366 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:06:15.787029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:06:15.796415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:06:15.811147 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:06:15.813744 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:06:15.814968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:06:15.817442 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:06:15.825389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:06:15.840456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:06:15.865269 kernel: scsi host0: Virtio SCSI HBA Mar 7 01:06:15.875248 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:06:15.881263 kernel: libata version 3.00 loaded. Mar 7 01:06:15.887248 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 01:06:16.092338 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:06:16.094810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:06:16.136305 kernel: AES CTR mode by8 optimization enabled Mar 7 01:06:16.129865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:16.135210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:06:16.136017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:06:16.136743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:16.149271 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:06:16.149476 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:06:16.138504 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:06:16.146482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:06:16.162400 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:06:16.162602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:06:16.182272 kernel: scsi host1: ahci Mar 7 01:06:16.185279 kernel: scsi host2: ahci Mar 7 01:06:16.186296 kernel: scsi host3: ahci Mar 7 01:06:16.187247 kernel: scsi host4: ahci Mar 7 01:06:16.187439 kernel: scsi host5: ahci Mar 7 01:06:16.188370 kernel: scsi host6: ahci Mar 7 01:06:16.188563 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Mar 7 01:06:16.188590 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Mar 7 01:06:16.188608 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Mar 7 01:06:16.188623 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Mar 7 01:06:16.188638 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Mar 7 01:06:16.188650 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Mar 7 01:06:16.191272 kernel: sd 0:0:0:0: Power-on or device reset occurred Mar 7 01:06:16.191473 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Mar 7 01:06:16.191682 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:06:16.191851 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Mar 7 01:06:16.192011 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 01:06:16.194836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:06:16.194863 kernel: GPT:9289727 != 167739391 Mar 7 01:06:16.194874 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:06:16.194884 kernel: GPT:9289727 != 167739391 Mar 7 01:06:16.194894 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 7 01:06:16.194903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:16.194919 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:06:16.319644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:06:16.337457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:06:16.361718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:06:16.495277 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.503254 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.503294 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.507262 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.508256 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.510261 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:06:16.553266 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445) Mar 7 01:06:16.560266 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 7 01:06:16.562380 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (444) Mar 7 01:06:16.568630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 7 01:06:16.578522 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 01:06:16.583883 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 01:06:16.585873 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 01:06:16.592375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:06:16.597521 disk-uuid[567]: Primary Header is updated. 
Mar 7 01:06:16.597521 disk-uuid[567]: Secondary Entries is updated. Mar 7 01:06:16.597521 disk-uuid[567]: Secondary Header is updated. Mar 7 01:06:16.605270 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:16.611254 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:17.615563 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:06:17.615664 disk-uuid[568]: The operation has completed successfully. Mar 7 01:06:17.673977 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:06:17.674117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:06:17.685398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:06:17.690802 sh[582]: Success Mar 7 01:06:17.707041 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:06:17.754843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:06:17.765339 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:06:17.768384 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:06:17.787273 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:06:17.787314 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:17.792907 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:06:17.792933 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:06:17.797527 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:06:17.805246 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 01:06:17.807386 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:06:17.808774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 7 01:06:17.815354 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:06:17.819387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:06:17.833460 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:17.833501 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:17.837319 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:06:17.844728 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:06:17.844768 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:06:17.855432 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:06:17.859502 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:17.866402 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:06:17.873471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:06:17.948508 ignition[682]: Ignition 2.19.0 Mar 7 01:06:17.948522 ignition[682]: Stage: fetch-offline Mar 7 01:06:17.952557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:06:17.948575 ignition[682]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:17.948586 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:17.948676 ignition[682]: parsed url from cmdline: "" Mar 7 01:06:17.948682 ignition[682]: no config URL provided Mar 7 01:06:17.948691 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:06:17.958424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 01:06:17.948705 ignition[682]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:06:17.948713 ignition[682]: failed to fetch config: resource requires networking Mar 7 01:06:17.950210 ignition[682]: Ignition finished successfully Mar 7 01:06:17.967381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:06:17.989490 systemd-networkd[771]: lo: Link UP Mar 7 01:06:17.989502 systemd-networkd[771]: lo: Gained carrier Mar 7 01:06:17.991201 systemd-networkd[771]: Enumeration completed Mar 7 01:06:17.991589 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:06:17.992081 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:06:17.992086 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:06:17.994606 systemd-networkd[771]: eth0: Link UP Mar 7 01:06:17.994611 systemd-networkd[771]: eth0: Gained carrier Mar 7 01:06:17.994618 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:06:17.995127 systemd[1]: Reached target network.target - Network. Mar 7 01:06:18.001638 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:06:18.016988 ignition[773]: Ignition 2.19.0 Mar 7 01:06:18.017600 ignition[773]: Stage: fetch Mar 7 01:06:18.017766 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:18.017779 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:18.017890 ignition[773]: parsed url from cmdline: "" Mar 7 01:06:18.017894 ignition[773]: no config URL provided Mar 7 01:06:18.017900 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:06:18.017912 ignition[773]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:06:18.017929 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 Mar 7 01:06:18.018107 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.218869 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 Mar 7 01:06:18.219122 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.619497 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 Mar 7 01:06:18.619685 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 01:06:18.701320 systemd-networkd[771]: eth0: DHCPv4 address 172.239.198.121/24, gateway 172.239.198.1 acquired from 23.213.15.213 Mar 7 01:06:19.420510 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4 Mar 7 01:06:19.590417 systemd-networkd[771]: eth0: Gained IPv6LL Mar 7 01:06:19.666855 ignition[773]: PUT result: OK Mar 7 01:06:19.666921 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 Mar 7 01:06:19.778781 ignition[773]: GET result: OK Mar 7 01:06:19.778892 ignition[773]: parsing config with SHA512: e25e15d792283a9c56278a662e8cc0a4682f67670a0c107fa124921d663bdb0d57992de37c9f4ebd7ee9c7b2c9d4304a4473365e464b5f2fe108819a1bd0faad Mar 7 01:06:19.782537 unknown[773]: fetched base 
config from "system" Mar 7 01:06:19.782551 unknown[773]: fetched base config from "system" Mar 7 01:06:19.783249 ignition[773]: fetch: fetch complete Mar 7 01:06:19.782558 unknown[773]: fetched user config from "akamai" Mar 7 01:06:19.783256 ignition[773]: fetch: fetch passed Mar 7 01:06:19.783307 ignition[773]: Ignition finished successfully Mar 7 01:06:19.787046 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:06:19.805418 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:06:19.821378 ignition[780]: Ignition 2.19.0 Mar 7 01:06:19.821399 ignition[780]: Stage: kargs Mar 7 01:06:19.821614 ignition[780]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:19.821633 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:19.830557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:06:19.822545 ignition[780]: kargs: kargs passed Mar 7 01:06:19.822593 ignition[780]: Ignition finished successfully Mar 7 01:06:19.843395 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:06:19.855849 ignition[786]: Ignition 2.19.0 Mar 7 01:06:19.855864 ignition[786]: Stage: disks Mar 7 01:06:19.866467 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:06:19.856079 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:06:19.882018 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:06:19.856091 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:06:19.883954 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:06:19.857368 ignition[786]: disks: disks passed Mar 7 01:06:19.884920 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:06:19.857412 ignition[786]: Ignition finished successfully Mar 7 01:06:19.885852 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 7 01:06:19.887697 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:06:19.895434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:06:19.916579 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:06:19.921593 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:06:19.928491 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:06:20.018269 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:06:20.019148 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:06:20.020826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:06:20.026301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:06:20.029342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:06:20.031157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:06:20.032577 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:06:20.032601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:06:20.041829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803) Mar 7 01:06:20.041855 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:06:20.046257 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:06:20.048364 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:06:20.048997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:06:20.052710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:06:20.058255 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:06:20.058296 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:06:20.071334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:06:20.116257 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:06:20.122407 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:06:20.127006 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:06:20.131677 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:06:20.224298 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:06:20.230323 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:06:20.233367 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:06:20.242269 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:06:20.246170 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:06:20.268578 ignition[916]: INFO : Ignition 2.19.0
Mar 7 01:06:20.270123 ignition[916]: INFO : Stage: mount
Mar 7 01:06:20.270123 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:06:20.270123 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:06:20.269859 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:06:20.276999 ignition[916]: INFO : mount: mount passed
Mar 7 01:06:20.276999 ignition[916]: INFO : Ignition finished successfully
Mar 7 01:06:20.274188 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:06:20.282335 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:06:21.024411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:06:21.040393 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (927)
Mar 7 01:06:21.040444 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:06:21.043780 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:06:21.046567 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:06:21.053293 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:06:21.053323 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:06:21.058491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:06:21.087215 ignition[943]: INFO : Ignition 2.19.0
Mar 7 01:06:21.087215 ignition[943]: INFO : Stage: files
Mar 7 01:06:21.089382 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:06:21.089382 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:06:21.089382 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:06:21.092763 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:06:21.092763 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:06:21.095009 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:06:21.095009 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:06:21.097165 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:06:21.097165 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:06:21.097165 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:06:21.095140 unknown[943]: wrote ssh authorized keys file for user: core
Mar 7 01:06:21.412662 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:06:21.539034 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:06:21.539034 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:06:21.542280 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:06:22.078758 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:06:22.516520 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:06:22.516520 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:06:22.521653 ignition[943]: INFO : files: files passed
Mar 7 01:06:22.521653 ignition[943]: INFO : Ignition finished successfully
Mar 7 01:06:22.521854 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:06:22.553441 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:06:22.561381 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:06:22.568797 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:06:22.568940 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:06:22.576515 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:06:22.578209 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:06:22.580038 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:06:22.582706 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:06:22.584900 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:06:22.590366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:06:22.625522 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:06:22.625683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:06:22.627644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:06:22.628926 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:06:22.630604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:06:22.642377 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:06:22.656454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:06:22.664411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:06:22.673975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:06:22.675394 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:06:22.676728 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:06:22.678407 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:06:22.678524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:06:22.680528 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:06:22.681674 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:06:22.683442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:06:22.685096 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:06:22.686989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:06:22.688990 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:06:22.690923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:06:22.692780 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:06:22.694626 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:06:22.696476 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:06:22.698259 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:06:22.698373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:06:22.700184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:06:22.701444 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:06:22.702913 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:06:22.703028 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:06:22.704670 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:06:22.704772 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:06:22.707209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:06:22.707361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:06:22.708649 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:06:22.708767 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:06:22.718382 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:06:22.721386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:06:22.722784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:06:22.722900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:06:22.727504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:06:22.727717 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:06:22.737065 ignition[997]: INFO : Ignition 2.19.0
Mar 7 01:06:22.738311 ignition[997]: INFO : Stage: umount
Mar 7 01:06:22.739681 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:06:22.739681 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:06:22.739628 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:06:22.739741 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:06:22.746973 ignition[997]: INFO : umount: umount passed
Mar 7 01:06:22.746973 ignition[997]: INFO : Ignition finished successfully
Mar 7 01:06:22.746554 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:06:22.747281 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:06:22.753593 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:06:22.753661 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:06:22.756432 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:06:22.756497 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:06:22.760210 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:06:22.760311 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:06:22.761217 systemd[1]: Stopped target network.target - Network.
Mar 7 01:06:22.762004 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:06:22.762070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:06:22.765427 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:06:22.766758 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:06:22.784281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:06:22.793852 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:06:22.795735 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:06:22.797360 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:06:22.797435 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:06:22.799149 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:06:22.799210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:06:22.800962 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:06:22.801042 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:06:22.802544 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:06:22.802617 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:06:22.804473 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:06:22.806188 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:06:22.809599 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:06:22.810566 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:06:22.810747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:06:22.811285 systemd-networkd[771]: eth0: DHCPv6 lease lost
Mar 7 01:06:22.814761 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:06:22.814928 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:06:22.818378 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:06:22.818523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:06:22.824115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:06:22.824179 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:06:22.825556 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:06:22.825615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:06:22.833346 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:06:22.835131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:06:22.835202 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:06:22.838682 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:06:22.838735 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:06:22.839612 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:06:22.839663 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:06:22.841586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:06:22.841821 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:06:22.843250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:06:22.858151 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:06:22.858863 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:06:22.861896 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:06:22.862078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:06:22.863896 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:06:22.863947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:06:22.865224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:06:22.865338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:06:22.866929 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:06:22.866984 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:06:22.869252 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:06:22.869305 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:06:22.870868 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:06:22.870918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:06:22.878402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:06:22.880731 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:06:22.880788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:06:22.883441 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:06:22.883496 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:06:22.884818 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:06:22.884871 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:06:22.886320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:06:22.886385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:06:22.888311 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:06:22.888429 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:06:22.890507 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:06:22.901759 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:06:22.909680 systemd[1]: Switching root.
Mar 7 01:06:22.940271 systemd-journald[178]: Journal stopped
Mar 7 01:06:24.097839 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:06:24.097871 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:06:24.097884 kernel: SELinux: policy capability open_perms=1
Mar 7 01:06:24.097894 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:06:24.097907 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:06:24.097917 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:06:24.097928 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:06:24.097938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:06:24.097948 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:06:24.097957 kernel: audit: type=1403 audit(1772845583.100:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:06:24.098025 systemd[1]: Successfully loaded SELinux policy in 59.303ms.
Mar 7 01:06:24.098041 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.621ms.
Mar 7 01:06:24.098054 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:06:24.098065 systemd[1]: Detected virtualization kvm.
Mar 7 01:06:24.098076 systemd[1]: Detected architecture x86-64.
Mar 7 01:06:24.098090 systemd[1]: Detected first boot.
Mar 7 01:06:24.098115 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:06:24.098127 zram_generator::config[1040]: No configuration found.
Mar 7 01:06:24.098139 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:06:24.098150 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:06:24.098160 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:06:24.098171 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:06:24.098183 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:06:24.098197 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:06:24.098208 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:06:24.098219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:06:24.098245 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:06:24.098257 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:06:24.098269 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:06:24.098280 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:06:24.098294 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:06:24.098305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:06:24.098317 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:06:24.098328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:06:24.098339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:06:24.098352 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:06:24.098363 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:06:24.098377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:06:24.098396 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:06:24.098408 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:06:24.098422 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:06:24.098434 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:06:24.098446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:06:24.098457 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:06:24.098469 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:06:24.098480 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:06:24.098494 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:06:24.098505 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:06:24.098517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:06:24.098528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:06:24.098546 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:06:24.098561 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:06:24.098572 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:06:24.098584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:06:24.098595 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:06:24.098607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:06:24.098620 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:06:24.098631 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:06:24.098648 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:06:24.098663 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:06:24.098674 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:06:24.098686 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:06:24.098697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:06:24.098709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:06:24.098720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:06:24.098732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:06:24.098743 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:06:24.098757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:06:24.098791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:06:24.098803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:06:24.098815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:06:24.098826 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:06:24.098838 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:06:24.098850 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:06:24.098862 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:06:24.098876 kernel: loop: module loaded
Mar 7 01:06:24.098887 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:06:24.098899 kernel: ACPI: bus type drm_connector registered
Mar 7 01:06:24.098915 kernel: fuse: init (API version 7.39)
Mar 7 01:06:24.098926 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:06:24.098938 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:06:24.098950 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:06:24.098961 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:06:24.098972 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:06:24.098987 systemd[1]: Stopped verity-setup.service.
Mar 7 01:06:24.099021 systemd-journald[1123]: Collecting audit messages is disabled.
Mar 7 01:06:24.099042 systemd-journald[1123]: Journal started
Mar 7 01:06:24.099066 systemd-journald[1123]: Runtime Journal (/run/log/journal/f895dc2d8cb8454a8fffb06d19843ef8) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:06:23.708984 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:06:23.725634 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:06:23.726356 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:06:24.106286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:06:24.118471 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:06:24.119518 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:06:24.120457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:06:24.122264 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:06:24.123267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:06:24.124214 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:06:24.125195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:06:24.126433 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:06:24.127688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:06:24.128887 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:06:24.129331 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:06:24.130654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:06:24.130886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:06:24.132138 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:06:24.132679 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:06:24.133918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:06:24.134189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:06:24.135390 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:06:24.135627 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:06:24.137079 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:06:24.137366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:06:24.138759 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:06:24.140116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:06:24.164164 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:06:24.180989 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:06:24.189521 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:06:24.195774 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:06:24.197026 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:06:24.197113 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:06:24.199085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:06:24.207057 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:06:24.215669 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:06:24.217054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:06:24.219446 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:06:24.225379 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:06:24.226598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:06:24.230390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:06:24.231822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:06:24.234811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:06:24.247466 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:06:24.251879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:06:24.262294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:06:24.264702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:06:24.266427 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:06:24.269594 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:06:24.271294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:06:24.283505 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:06:24.292444 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:06:24.300179 systemd-journald[1123]: Time spent on flushing to /var/log/journal/f895dc2d8cb8454a8fffb06d19843ef8 is 69.231ms for 981 entries.
Mar 7 01:06:24.300179 systemd-journald[1123]: System Journal (/var/log/journal/f895dc2d8cb8454a8fffb06d19843ef8) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:06:24.387202 systemd-journald[1123]: Received client request to flush runtime journal.
Mar 7 01:06:24.387261 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:06:24.387278 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:06:24.387292 kernel: loop1: detected capacity change from 0 to 8
Mar 7 01:06:24.303398 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:06:24.304607 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Mar 7 01:06:24.304621 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Mar 7 01:06:24.322579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:06:24.335421 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:06:24.356984 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:06:24.368641 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:06:24.371368 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:06:24.373637 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:06:24.389502 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:06:24.425843 kernel: loop2: detected capacity change from 0 to 140768
Mar 7 01:06:24.436772 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:06:24.446220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:06:24.469122 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:06:24.469485 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:06:24.476361 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:06:24.485452 kernel: loop3: detected capacity change from 0 to 228704
Mar 7 01:06:24.529752 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 01:06:24.554839 kernel: loop5: detected capacity change from 0 to 8
Mar 7 01:06:24.560294 kernel: loop6: detected capacity change from 0 to 140768
Mar 7 01:06:24.584281 kernel: loop7: detected capacity change from 0 to 228704
Mar 7 01:06:24.609985 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 7 01:06:24.612857 (sd-merge)[1189]: Merged extensions into '/usr'.
Mar 7 01:06:24.619030 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:06:24.619103 systemd[1]: Reloading...
Mar 7 01:06:24.732267 zram_generator::config[1214]: No configuration found.
Mar 7 01:06:24.832999 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:06:24.876156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:06:24.924356 systemd[1]: Reloading finished in 304 ms.
Mar 7 01:06:24.960476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:06:24.962166 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:06:24.963506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:06:24.972366 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:06:24.975470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:06:24.987387 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:06:24.992113 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:06:24.992170 systemd[1]: Reloading...
Mar 7 01:06:24.997405 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:06:24.997797 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:06:24.998781 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:06:24.999060 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 7 01:06:24.999147 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 7 01:06:25.011474 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:06:25.012106 systemd-tmpfiles[1260]: Skipping /boot
Mar 7 01:06:25.019860 systemd-udevd[1261]: Using default interface naming scheme 'v255'.
Mar 7 01:06:25.034911 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:06:25.034927 systemd-tmpfiles[1260]: Skipping /boot
Mar 7 01:06:25.075177 zram_generator::config[1295]: No configuration found.
Mar 7 01:06:25.207303 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1315)
Mar 7 01:06:25.305407 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:06:25.305797 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:06:25.310916 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:06:25.316841 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:06:25.317013 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:06:25.315104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:06:25.329254 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:06:25.401254 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:06:25.405727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:06:25.407686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:06:25.408128 systemd[1]: Reloading finished in 415 ms.
Mar 7 01:06:25.420263 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:06:25.428035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:06:25.429401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:06:25.450200 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:06:25.458370 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:06:25.480093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:06:25.486372 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:06:25.489381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:06:25.490306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:06:25.491931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:06:25.495517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:06:25.498935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:06:25.502379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:06:25.505481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:06:25.507203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:06:25.510599 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:06:25.514853 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:06:25.520488 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:06:25.529922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:06:25.538397 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:06:25.541465 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:06:25.551719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:06:25.552996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:06:25.554581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:06:25.554814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:06:25.561688 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:06:25.564369 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:06:25.569112 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:06:25.571297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:06:25.593788 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:06:25.595530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:06:25.595965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:06:25.602012 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:06:25.609855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:06:25.617355 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:06:25.617605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:06:25.621131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:06:25.638311 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:06:25.639731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:06:25.648310 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:06:25.649483 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:06:25.653078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:06:25.656951 augenrules[1407]: No rules
Mar 7 01:06:25.661455 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:06:25.663246 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:06:25.666273 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:06:25.675412 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:06:25.676335 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:06:25.716101 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:06:25.724312 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:06:25.823986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:06:25.832507 systemd-networkd[1382]: lo: Link UP
Mar 7 01:06:25.832515 systemd-networkd[1382]: lo: Gained carrier
Mar 7 01:06:25.834192 systemd-networkd[1382]: Enumeration completed
Mar 7 01:06:25.834307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:06:25.837652 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:06:25.837708 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:06:25.838618 systemd-networkd[1382]: eth0: Link UP
Mar 7 01:06:25.838674 systemd-networkd[1382]: eth0: Gained carrier
Mar 7 01:06:25.838722 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:06:25.841504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:06:25.842486 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:06:25.843361 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:06:25.850560 systemd-resolved[1383]: Positive Trust Anchors:
Mar 7 01:06:25.850578 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:06:25.850608 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:06:25.854416 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Mar 7 01:06:25.856274 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:06:25.857139 systemd[1]: Reached target network.target - Network.
Mar 7 01:06:25.857875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:06:25.858683 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:06:25.859548 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:06:25.860407 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:06:25.861450 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:06:25.862327 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:06:25.863109 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:06:25.863908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:06:25.863941 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:06:25.864673 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:06:25.867279 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:06:25.869501 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:06:25.876477 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:06:25.877798 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:06:25.878729 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:06:25.879590 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:06:25.880442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:06:25.880479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:06:25.881632 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:06:25.884389 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:06:25.889428 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:06:25.891372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:06:25.896448 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:06:25.898032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:06:25.905500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:06:25.908752 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:06:25.912995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:06:25.915686 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:06:25.928896 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:06:25.930687 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:06:25.931115 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:06:25.933067 jq[1435]: false
Mar 7 01:06:25.933394 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:06:25.938375 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:06:25.949388 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:06:25.949616 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:06:25.949970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:06:25.950152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:06:25.990305 jq[1446]: true
Mar 7 01:06:25.992858 dbus-daemon[1434]: [system] SELinux support is enabled
Mar 7 01:06:25.997443 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found loop4
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found loop5
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found loop6
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found loop7
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda1
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda2
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda3
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found usr
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda4
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda6
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda7
Mar 7 01:06:25.999363 extend-filesystems[1436]: Found sda9
Mar 7 01:06:25.999363 extend-filesystems[1436]: Checking size of /dev/sda9
Mar 7 01:06:26.052285 extend-filesystems[1436]: Resized partition /dev/sda9
Mar 7 01:06:26.001498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:06:26.054488 update_engine[1444]: I20260307 01:06:26.028733 1444 main.cc:92] Flatcar Update Engine starting
Mar 7 01:06:26.054488 update_engine[1444]: I20260307 01:06:26.048064 1444 update_check_scheduler.cc:74] Next update check in 8m4s
Mar 7 01:06:26.058256 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:06:26.001535 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:06:26.004449 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:06:26.063522 tar[1461]: linux-amd64/LICENSE
Mar 7 01:06:26.004469 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:06:26.066318 jq[1463]: true
Mar 7 01:06:26.067623 tar[1461]: linux-amd64/helm
Mar 7 01:06:26.017355 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:06:26.017372 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:06:26.017614 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:06:26.045944 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:06:26.054732 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:06:26.069319 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 7 01:06:26.092402 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:06:26.092429 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:06:26.097427 systemd-logind[1442]: New seat seat0.
Mar 7 01:06:26.102702 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:06:26.143519 coreos-metadata[1433]: Mar 07 01:06:26.143 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:06:26.203100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1284)
Mar 7 01:06:26.213095 bash[1492]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:06:26.215890 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:06:26.229435 systemd[1]: Starting sshkeys.service...
Mar 7 01:06:26.267645 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:06:26.282373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:06:26.371310 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:06:26.374423 coreos-metadata[1501]: Mar 07 01:06:26.373 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:06:26.393800 containerd[1457]: time="2026-03-07T01:06:26.393721972Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:06:26.439246 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Mar 7 01:06:26.440083 containerd[1457]: time="2026-03-07T01:06:26.440038435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.441867 containerd[1457]: time="2026-03-07T01:06:26.441832178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:06:26.441867 containerd[1457]: time="2026-03-07T01:06:26.441865768Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:06:26.441927 containerd[1457]: time="2026-03-07T01:06:26.441881358Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:06:26.450195 containerd[1457]: time="2026-03-07T01:06:26.450122865Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:06:26.450195 containerd[1457]: time="2026-03-07T01:06:26.450166325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450363 containerd[1457]: time="2026-03-07T01:06:26.450281665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450363 containerd[1457]: time="2026-03-07T01:06:26.450298545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450552 containerd[1457]: time="2026-03-07T01:06:26.450528996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450619 containerd[1457]: time="2026-03-07T01:06:26.450553966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450619 containerd[1457]: time="2026-03-07T01:06:26.450569016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:06:26.450619 containerd[1457]: time="2026-03-07T01:06:26.450579616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.451332 containerd[1457]: time="2026-03-07T01:06:26.450673186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.451332 containerd[1457]: time="2026-03-07T01:06:26.450923177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:06:26.451564 extend-filesystems[1472]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 7 01:06:26.451564 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 10
Mar 7 01:06:26.451564 extend-filesystems[1472]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Mar 7 01:06:26.467268 extend-filesystems[1436]: Resized filesystem in /dev/sda9
Mar 7 01:06:26.454374 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.452159169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.452175359Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.452314769Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.452372779Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.458933663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.458975383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.458991443Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459010563Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459024303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459164043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459496874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459628644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:06:26.471692 containerd[1457]: time="2026-03-07T01:06:26.459644194Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:06:26.454622 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459656254Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459668154Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459680244Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459698384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459719084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459731644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459744284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459755204Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459765624Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459783514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459795504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459807584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459818834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472046 containerd[1457]: time="2026-03-07T01:06:26.459830084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.462772 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459841084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459852274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459863614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459884574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459898284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459913875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459924785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459935725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459948905Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459965895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459981365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.459991965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.460056475Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
type=io.containerd.tracing.processor.v1 Mar 7 01:06:26.472338 containerd[1457]: time="2026-03-07T01:06:26.460073185Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460082795Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460093695Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460161155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460173985Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460183865Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:06:26.472573 containerd[1457]: time="2026-03-07T01:06:26.460193305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.460422836Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.460475346Z" level=info msg="Connect containerd service" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.460504136Z" level=info msg="using legacy CRI server" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.460510176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.460579016Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461198607Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461386407Z" level=info msg="Start subscribing containerd event" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461624978Z" level=info msg="Start recovering state" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461678998Z" level=info msg="Start event monitor" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461700318Z" level=info msg="Start snapshots 
syncer" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461708738Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.461716118Z" level=info msg="Start streaming server" Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.462621940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.462678050Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:06:26.472701 containerd[1457]: time="2026-03-07T01:06:26.462841010Z" level=info msg="containerd successfully booted in 0.072255s" Mar 7 01:06:26.512327 systemd-networkd[1382]: eth0: DHCPv4 address 172.239.198.121/24, gateway 172.239.198.1 acquired from 23.213.15.213 Mar 7 01:06:26.513206 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1382 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 01:06:26.513749 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Mar 7 01:06:26.525504 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 7 01:06:26.634312 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:06:26.634442 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 01:06:26.635589 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:06:26.642456 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:06:26.646173 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 7 01:06:26.663100 polkitd[1517]: Started polkitd version 121 Mar 7 01:06:26.668115 polkitd[1517]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:06:26.668344 polkitd[1517]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:06:26.669016 polkitd[1517]: Finished loading, compiling and executing 2 rules Mar 7 01:06:26.669481 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:06:26.669755 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 01:06:26.671507 polkitd[1517]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:06:26.684474 systemd-resolved[1383]: System hostname changed to '172-239-198-121'. Mar 7 01:06:26.684842 systemd-hostnamed[1516]: Hostname set to <172-239-198-121> (transient) Mar 7 01:06:26.686414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:06:26.696905 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:06:26.725000 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:06:26.725358 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:06:26.735154 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:06:26.750151 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:06:26.759216 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:06:27.976208 systemd-resolved[1383]: Clock change detected. Flushing caches. Mar 7 01:06:27.976486 systemd-timesyncd[1385]: Contacted time server 144.202.66.214:123 (0.flatcar.pool.ntp.org). Mar 7 01:06:27.976978 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2026-03-07 01:06:27.976170 UTC. Mar 7 01:06:27.978618 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:06:27.979580 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 7 01:06:28.076292 tar[1461]: linux-amd64/README.md Mar 7 01:06:28.089838 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:06:28.360617 coreos-metadata[1433]: Mar 07 01:06:28.360 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:06:28.450491 coreos-metadata[1433]: Mar 07 01:06:28.450 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 7 01:06:28.596734 coreos-metadata[1501]: Mar 07 01:06:28.596 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:06:28.638350 coreos-metadata[1433]: Mar 07 01:06:28.638 INFO Fetch successful Mar 7 01:06:28.638350 coreos-metadata[1433]: Mar 07 01:06:28.638 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 7 01:06:28.696963 coreos-metadata[1501]: Mar 07 01:06:28.696 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 7 01:06:28.829741 coreos-metadata[1501]: Mar 07 01:06:28.829 INFO Fetch successful Mar 7 01:06:28.846175 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:06:28.846749 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:06:28.850151 systemd[1]: Finished sshkeys.service. Mar 7 01:06:28.866399 systemd-networkd[1382]: eth0: Gained IPv6LL Mar 7 01:06:28.868431 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:06:28.870652 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:06:28.878415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:06:28.880466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:06:28.901068 coreos-metadata[1433]: Mar 07 01:06:28.898 INFO Fetch successful Mar 7 01:06:28.909376 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:06:28.994242 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 7 01:06:28.995606 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:06:29.805074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:06:29.806648 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:06:29.808380 systemd[1]: Startup finished in 972ms (kernel) + 8.379s (initrd) + 5.553s (userspace) = 14.905s. Mar 7 01:06:29.813713 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:06:30.381043 kubelet[1588]: E0307 01:06:30.380975 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:06:30.384656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:06:30.384881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:06:30.829211 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:06:30.849535 systemd[1]: Started sshd@0-172.239.198.121:22-68.220.241.50:41804.service - OpenSSH per-connection server daemon (68.220.241.50:41804). Mar 7 01:06:31.011856 sshd[1600]: Accepted publickey for core from 68.220.241.50 port 41804 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:31.014051 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:31.023490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:06:31.028618 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:06:31.031877 systemd-logind[1442]: New session 1 of user core. 
Mar 7 01:06:31.052293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:06:31.062653 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:06:31.066374 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:06:31.176735 systemd[1604]: Queued start job for default target default.target. Mar 7 01:06:31.184489 systemd[1604]: Created slice app.slice - User Application Slice. Mar 7 01:06:31.184538 systemd[1604]: Reached target paths.target - Paths. Mar 7 01:06:31.184561 systemd[1604]: Reached target timers.target - Timers. Mar 7 01:06:31.188576 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:06:31.199946 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:06:31.200078 systemd[1604]: Reached target sockets.target - Sockets. Mar 7 01:06:31.200093 systemd[1604]: Reached target basic.target - Basic System. Mar 7 01:06:31.200135 systemd[1604]: Reached target default.target - Main User Target. Mar 7 01:06:31.200175 systemd[1604]: Startup finished in 126ms. Mar 7 01:06:31.200300 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:06:31.209403 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:06:31.350854 systemd[1]: Started sshd@1-172.239.198.121:22-68.220.241.50:41814.service - OpenSSH per-connection server daemon (68.220.241.50:41814). Mar 7 01:06:31.501912 sshd[1616]: Accepted publickey for core from 68.220.241.50 port 41814 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:31.502543 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:31.508459 systemd-logind[1442]: New session 2 of user core. Mar 7 01:06:31.517431 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 7 01:06:31.636014 sshd[1616]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:31.640776 systemd[1]: sshd@1-172.239.198.121:22-68.220.241.50:41814.service: Deactivated successfully. Mar 7 01:06:31.643121 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:06:31.643955 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:06:31.645131 systemd-logind[1442]: Removed session 2. Mar 7 01:06:31.673564 systemd[1]: Started sshd@2-172.239.198.121:22-68.220.241.50:41820.service - OpenSSH per-connection server daemon (68.220.241.50:41820). Mar 7 01:06:31.821819 sshd[1623]: Accepted publickey for core from 68.220.241.50 port 41820 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:31.823982 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:31.828909 systemd-logind[1442]: New session 3 of user core. Mar 7 01:06:31.839440 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:06:31.951485 sshd[1623]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:31.955316 systemd[1]: sshd@2-172.239.198.121:22-68.220.241.50:41820.service: Deactivated successfully. Mar 7 01:06:31.957188 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:06:31.957943 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:06:31.959088 systemd-logind[1442]: Removed session 3. Mar 7 01:06:31.991551 systemd[1]: Started sshd@3-172.239.198.121:22-68.220.241.50:41836.service - OpenSSH per-connection server daemon (68.220.241.50:41836). Mar 7 01:06:32.149974 sshd[1630]: Accepted publickey for core from 68.220.241.50 port 41836 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:32.151711 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:32.156596 systemd-logind[1442]: New session 4 of user core. 
Mar 7 01:06:32.167420 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:06:32.289635 sshd[1630]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:32.293065 systemd[1]: sshd@3-172.239.198.121:22-68.220.241.50:41836.service: Deactivated successfully. Mar 7 01:06:32.295538 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:06:32.296991 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:06:32.298120 systemd-logind[1442]: Removed session 4. Mar 7 01:06:32.330660 systemd[1]: Started sshd@4-172.239.198.121:22-68.220.241.50:52398.service - OpenSSH per-connection server daemon (68.220.241.50:52398). Mar 7 01:06:32.480896 sshd[1637]: Accepted publickey for core from 68.220.241.50 port 52398 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:32.482549 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:32.487204 systemd-logind[1442]: New session 5 of user core. Mar 7 01:06:32.493401 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:06:32.597454 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:06:32.597845 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:06:32.612362 sudo[1640]: pam_unix(sudo:session): session closed for user root Mar 7 01:06:32.634252 sshd[1637]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:32.637511 systemd[1]: sshd@4-172.239.198.121:22-68.220.241.50:52398.service: Deactivated successfully. Mar 7 01:06:32.640295 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:06:32.641820 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:06:32.642986 systemd-logind[1442]: Removed session 5. Mar 7 01:06:32.667495 systemd[1]: Started sshd@5-172.239.198.121:22-68.220.241.50:52406.service - OpenSSH per-connection server daemon (68.220.241.50:52406). 
Mar 7 01:06:32.817300 sshd[1645]: Accepted publickey for core from 68.220.241.50 port 52406 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:32.818900 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:32.823643 systemd-logind[1442]: New session 6 of user core. Mar 7 01:06:32.832393 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:06:32.929359 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:06:32.929829 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:06:32.934399 sudo[1649]: pam_unix(sudo:session): session closed for user root Mar 7 01:06:32.941137 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:06:32.941570 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:06:32.957471 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:06:32.963757 auditctl[1652]: No rules Mar 7 01:06:32.964177 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:06:32.964405 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:06:32.969702 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:06:32.999925 augenrules[1670]: No rules Mar 7 01:06:33.001908 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:06:33.003001 sudo[1648]: pam_unix(sudo:session): session closed for user root Mar 7 01:06:33.024916 sshd[1645]: pam_unix(sshd:session): session closed for user core Mar 7 01:06:33.028545 systemd[1]: sshd@5-172.239.198.121:22-68.220.241.50:52406.service: Deactivated successfully. Mar 7 01:06:33.030412 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 7 01:06:33.031677 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:06:33.033026 systemd-logind[1442]: Removed session 6. Mar 7 01:06:33.062359 systemd[1]: Started sshd@6-172.239.198.121:22-68.220.241.50:52422.service - OpenSSH per-connection server daemon (68.220.241.50:52422). Mar 7 01:06:33.237211 sshd[1678]: Accepted publickey for core from 68.220.241.50 port 52422 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:06:33.238200 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:06:33.243524 systemd-logind[1442]: New session 7 of user core. Mar 7 01:06:33.249402 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:06:33.351530 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:06:33.352115 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:06:33.617561 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:06:33.617757 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:06:33.870416 dockerd[1697]: time="2026-03-07T01:06:33.869772565Z" level=info msg="Starting up" Mar 7 01:06:33.989917 dockerd[1697]: time="2026-03-07T01:06:33.989748275Z" level=info msg="Loading containers: start." Mar 7 01:06:34.095307 kernel: Initializing XFRM netlink socket Mar 7 01:06:34.185883 systemd-networkd[1382]: docker0: Link UP Mar 7 01:06:34.199715 dockerd[1697]: time="2026-03-07T01:06:34.199671635Z" level=info msg="Loading containers: done." 
Mar 7 01:06:34.212482 dockerd[1697]: time="2026-03-07T01:06:34.212435861Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:06:34.212683 dockerd[1697]: time="2026-03-07T01:06:34.212521891Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:06:34.212683 dockerd[1697]: time="2026-03-07T01:06:34.212629851Z" level=info msg="Daemon has completed initialization" Mar 7 01:06:34.246829 dockerd[1697]: time="2026-03-07T01:06:34.246758099Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:06:34.247369 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:06:34.758154 containerd[1457]: time="2026-03-07T01:06:34.758101472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:06:34.941590 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1490775839-merged.mount: Deactivated successfully. Mar 7 01:06:35.316474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140677960.mount: Deactivated successfully. 
Mar 7 01:06:36.491746 containerd[1457]: time="2026-03-07T01:06:36.491701778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:36.492700 containerd[1457]: time="2026-03-07T01:06:36.492662860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116192" Mar 7 01:06:36.493298 containerd[1457]: time="2026-03-07T01:06:36.493245651Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:36.497185 containerd[1457]: time="2026-03-07T01:06:36.495863926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:36.497185 containerd[1457]: time="2026-03-07T01:06:36.497036739Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.738892147s" Mar 7 01:06:36.497185 containerd[1457]: time="2026-03-07T01:06:36.497065429Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:06:36.498057 containerd[1457]: time="2026-03-07T01:06:36.498023031Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:06:37.765626 containerd[1457]: time="2026-03-07T01:06:37.765556285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:37.766746 containerd[1457]: time="2026-03-07T01:06:37.766701957Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021816" Mar 7 01:06:37.767589 containerd[1457]: time="2026-03-07T01:06:37.767526659Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:37.775169 containerd[1457]: time="2026-03-07T01:06:37.774921804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:37.775954 containerd[1457]: time="2026-03-07T01:06:37.775404315Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.277345634s" Mar 7 01:06:37.775954 containerd[1457]: time="2026-03-07T01:06:37.775436975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:06:37.776812 containerd[1457]: time="2026-03-07T01:06:37.776775158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:06:38.896235 containerd[1457]: time="2026-03-07T01:06:38.896163376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:38.899318 containerd[1457]: time="2026-03-07T01:06:38.897914269Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162752" Mar 7 01:06:38.899318 containerd[1457]: time="2026-03-07T01:06:38.898779141Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:38.901243 containerd[1457]: time="2026-03-07T01:06:38.900808675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:38.902421 containerd[1457]: time="2026-03-07T01:06:38.901863187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.125046799s" Mar 7 01:06:38.902421 containerd[1457]: time="2026-03-07T01:06:38.901891577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:06:38.907498 containerd[1457]: time="2026-03-07T01:06:38.907447228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:06:39.872602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045802146.mount: Deactivated successfully. 
Mar 7 01:06:40.282618 containerd[1457]: time="2026-03-07T01:06:40.282557028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:40.283439 containerd[1457]: time="2026-03-07T01:06:40.283341940Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828653" Mar 7 01:06:40.283900 containerd[1457]: time="2026-03-07T01:06:40.283861181Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:40.286817 containerd[1457]: time="2026-03-07T01:06:40.286772746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:40.287420 containerd[1457]: time="2026-03-07T01:06:40.287373328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.37988128s" Mar 7 01:06:40.287420 containerd[1457]: time="2026-03-07T01:06:40.287411788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:06:40.291111 containerd[1457]: time="2026-03-07T01:06:40.291069495Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:06:40.426778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:06:40.436909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:06:40.609464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:06:40.611119 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:06:40.663619 kubelet[1917]: E0307 01:06:40.663539 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:06:40.669907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:06:40.670135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:06:40.826567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549348797.mount: Deactivated successfully. Mar 7 01:06:41.576984 containerd[1457]: time="2026-03-07T01:06:41.576909286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:41.578658 containerd[1457]: time="2026-03-07T01:06:41.578387509Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Mar 7 01:06:41.580948 containerd[1457]: time="2026-03-07T01:06:41.579102280Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:41.582000 containerd[1457]: time="2026-03-07T01:06:41.581960756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:41.584118 containerd[1457]: time="2026-03-07T01:06:41.584053460Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.292950375s" Mar 7 01:06:41.584118 containerd[1457]: time="2026-03-07T01:06:41.584108421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:06:41.585024 containerd[1457]: time="2026-03-07T01:06:41.584834192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:06:42.056639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256317968.mount: Deactivated successfully. Mar 7 01:06:42.061100 containerd[1457]: time="2026-03-07T01:06:42.060501003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:42.061356 containerd[1457]: time="2026-03-07T01:06:42.061315315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Mar 7 01:06:42.061859 containerd[1457]: time="2026-03-07T01:06:42.061823156Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:42.063644 containerd[1457]: time="2026-03-07T01:06:42.063596709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:42.064936 containerd[1457]: time="2026-03-07T01:06:42.064503861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.637939ms" Mar 7 01:06:42.064936 containerd[1457]: time="2026-03-07T01:06:42.064537751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:06:42.065450 containerd[1457]: time="2026-03-07T01:06:42.065415793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:06:42.648720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736787940.mount: Deactivated successfully. Mar 7 01:06:43.448077 containerd[1457]: time="2026-03-07T01:06:43.447726137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:43.449442 containerd[1457]: time="2026-03-07T01:06:43.448964869Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718846" Mar 7 01:06:43.449912 containerd[1457]: time="2026-03-07T01:06:43.449873591Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:43.452496 containerd[1457]: time="2026-03-07T01:06:43.452441446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:06:43.453918 containerd[1457]: time="2026-03-07T01:06:43.453597949Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.388155696s" Mar 7 01:06:43.453918 containerd[1457]: time="2026-03-07T01:06:43.453630369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:06:46.547284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:06:46.556494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:06:46.586365 systemd[1]: Reloading requested from client PID 2073 ('systemctl') (unit session-7.scope)... Mar 7 01:06:46.586386 systemd[1]: Reloading... Mar 7 01:06:46.727414 zram_generator::config[2114]: No configuration found. Mar 7 01:06:46.854165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:06:46.929771 systemd[1]: Reloading finished in 342 ms. Mar 7 01:06:46.991772 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:06:46.991881 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:06:46.992244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:06:46.998566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:06:47.153315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:06:47.157844 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:06:47.196023 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:06:47.196023 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:06:47.196023 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:06:47.196746 kubelet[2168]: I0307 01:06:47.196677 2168 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:06:47.791206 kubelet[2168]: I0307 01:06:47.791152 2168 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:06:47.791206 kubelet[2168]: I0307 01:06:47.791183 2168 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:06:47.791527 kubelet[2168]: I0307 01:06:47.791425 2168 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:06:47.825481 kubelet[2168]: E0307 01:06:47.825169 2168 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.198.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:06:47.825686 kubelet[2168]: I0307 01:06:47.825653 2168 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:06:47.835341 kubelet[2168]: E0307 01:06:47.835286 2168 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 
01:06:47.835341 kubelet[2168]: I0307 01:06:47.835325 2168 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:06:47.839586 kubelet[2168]: I0307 01:06:47.839548 2168 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:06:47.840495 kubelet[2168]: I0307 01:06:47.840445 2168 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:06:47.840664 kubelet[2168]: I0307 01:06:47.840481 2168 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-198-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconc
ilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:06:47.840664 kubelet[2168]: I0307 01:06:47.840664 2168 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:06:47.840791 kubelet[2168]: I0307 01:06:47.840674 2168 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:06:47.840850 kubelet[2168]: I0307 01:06:47.840826 2168 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:06:47.845947 kubelet[2168]: I0307 01:06:47.845786 2168 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:06:47.845947 kubelet[2168]: I0307 01:06:47.845835 2168 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:06:47.845947 kubelet[2168]: I0307 01:06:47.845864 2168 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:06:47.848408 kubelet[2168]: I0307 01:06:47.847894 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:06:47.851229 kubelet[2168]: E0307 01:06:47.851202 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.198.121:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-198-121&limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:06:47.851697 kubelet[2168]: I0307 01:06:47.851681 2168 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:06:47.852225 kubelet[2168]: I0307 01:06:47.852209 2168 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:06:47.854613 kubelet[2168]: W0307 
01:06:47.854568 2168 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:06:47.863369 kubelet[2168]: E0307 01:06:47.861376 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.198.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:06:47.863369 kubelet[2168]: I0307 01:06:47.861497 2168 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:06:47.863369 kubelet[2168]: I0307 01:06:47.861540 2168 server.go:1289] "Started kubelet" Mar 7 01:06:47.869560 kubelet[2168]: I0307 01:06:47.869516 2168 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:06:47.870659 kubelet[2168]: I0307 01:06:47.870644 2168 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:06:47.872899 kubelet[2168]: I0307 01:06:47.872081 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:06:47.875178 kubelet[2168]: I0307 01:06:47.874645 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:06:47.875178 kubelet[2168]: I0307 01:06:47.874873 2168 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:06:47.876174 kubelet[2168]: E0307 01:06:47.875046 2168 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.198.121:6443/api/v1/namespaces/default/events\": dial tcp 172.239.198.121:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-198-121.189a69affa325cc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-198-121,UID:172-239-198-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-198-121,},FirstTimestamp:2026-03-07 01:06:47.861509312 +0000 UTC m=+0.698588677,LastTimestamp:2026-03-07 01:06:47.861509312 +0000 UTC m=+0.698588677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-198-121,}" Mar 7 01:06:47.877096 kubelet[2168]: I0307 01:06:47.877080 2168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:06:47.881812 kubelet[2168]: E0307 01:06:47.881796 2168 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-198-121\" not found" Mar 7 01:06:47.881909 kubelet[2168]: I0307 01:06:47.881899 2168 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:06:47.882139 kubelet[2168]: I0307 01:06:47.882124 2168 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:06:47.882229 kubelet[2168]: I0307 01:06:47.882219 2168 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:06:47.882800 kubelet[2168]: E0307 01:06:47.882779 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.198.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:06:47.883228 kubelet[2168]: E0307 01:06:47.883210 2168 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:06:47.883476 kubelet[2168]: I0307 01:06:47.883459 2168 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:06:47.883608 kubelet[2168]: I0307 01:06:47.883592 2168 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:06:47.883908 kubelet[2168]: E0307 01:06:47.883868 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.198.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-198-121?timeout=10s\": dial tcp 172.239.198.121:6443: connect: connection refused" interval="200ms" Mar 7 01:06:47.885041 kubelet[2168]: I0307 01:06:47.885010 2168 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:06:47.901099 kubelet[2168]: I0307 01:06:47.901066 2168 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:06:47.901099 kubelet[2168]: I0307 01:06:47.901084 2168 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:06:47.901099 kubelet[2168]: I0307 01:06:47.901102 2168 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:06:47.906545 kubelet[2168]: I0307 01:06:47.906530 2168 policy_none.go:49] "None policy: Start" Mar 7 01:06:47.906874 kubelet[2168]: I0307 01:06:47.906673 2168 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:06:47.906874 kubelet[2168]: I0307 01:06:47.906692 2168 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:06:47.907047 kubelet[2168]: I0307 01:06:47.906508 2168 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:06:47.912589 kubelet[2168]: I0307 01:06:47.912247 2168 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:06:47.912589 kubelet[2168]: I0307 01:06:47.912289 2168 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:06:47.912589 kubelet[2168]: I0307 01:06:47.912331 2168 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:06:47.912589 kubelet[2168]: I0307 01:06:47.912341 2168 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:06:47.912589 kubelet[2168]: E0307 01:06:47.912406 2168 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:06:47.915794 kubelet[2168]: E0307 01:06:47.915772 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.198.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:06:47.920961 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:06:47.934582 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:06:47.937817 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:06:47.948302 kubelet[2168]: E0307 01:06:47.948084 2168 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:06:47.948354 kubelet[2168]: I0307 01:06:47.948337 2168 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:06:47.948385 kubelet[2168]: I0307 01:06:47.948349 2168 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:06:47.948606 kubelet[2168]: I0307 01:06:47.948580 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:06:47.950681 kubelet[2168]: E0307 01:06:47.950453 2168 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:06:47.950681 kubelet[2168]: E0307 01:06:47.950500 2168 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-198-121\" not found" Mar 7 01:06:48.026594 systemd[1]: Created slice kubepods-burstable-podcc045a6c948bf831983a8cd50cc32313.slice - libcontainer container kubepods-burstable-podcc045a6c948bf831983a8cd50cc32313.slice. 
Mar 7 01:06:48.051343 kubelet[2168]: I0307 01:06:48.051206 2168 kubelet_node_status.go:75] "Attempting to register node" node="172-239-198-121" Mar 7 01:06:48.051862 kubelet[2168]: E0307 01:06:48.051804 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.198.121:6443/api/v1/nodes\": dial tcp 172.239.198.121:6443: connect: connection refused" node="172-239-198-121" Mar 7 01:06:48.054834 kubelet[2168]: E0307 01:06:48.054536 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121" Mar 7 01:06:48.061754 systemd[1]: Created slice kubepods-burstable-pod33f4cb86d9f0dcc3daf9d01bdcbd6998.slice - libcontainer container kubepods-burstable-pod33f4cb86d9f0dcc3daf9d01bdcbd6998.slice. Mar 7 01:06:48.065219 kubelet[2168]: E0307 01:06:48.065088 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121" Mar 7 01:06:48.068731 systemd[1]: Created slice kubepods-burstable-pod9f0c87f48b81d468a3de35dfc1a01901.slice - libcontainer container kubepods-burstable-pod9f0c87f48b81d468a3de35dfc1a01901.slice. 
Mar 7 01:06:48.073320 kubelet[2168]: E0307 01:06:48.073110 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121" Mar 7 01:06:48.083684 kubelet[2168]: I0307 01:06:48.083636 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc045a6c948bf831983a8cd50cc32313-kubeconfig\") pod \"kube-scheduler-172-239-198-121\" (UID: \"cc045a6c948bf831983a8cd50cc32313\") " pod="kube-system/kube-scheduler-172-239-198-121" Mar 7 01:06:48.083684 kubelet[2168]: I0307 01:06:48.083688 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-ca-certs\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121" Mar 7 01:06:48.083684 kubelet[2168]: I0307 01:06:48.083709 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-k8s-certs\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121" Mar 7 01:06:48.083684 kubelet[2168]: I0307 01:06:48.083761 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-k8s-certs\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121" Mar 7 01:06:48.083684 kubelet[2168]: I0307 01:06:48.083808 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121" Mar 7 01:06:48.084243 kubelet[2168]: I0307 01:06:48.083854 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121" Mar 7 01:06:48.084243 kubelet[2168]: I0307 01:06:48.083899 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-ca-certs\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121" Mar 7 01:06:48.084243 kubelet[2168]: I0307 01:06:48.083928 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-flexvolume-dir\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121" Mar 7 01:06:48.084243 kubelet[2168]: I0307 01:06:48.083954 2168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-kubeconfig\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121" Mar 7 01:06:48.084243 kubelet[2168]: E0307 
01:06:48.084227 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.198.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-198-121?timeout=10s\": dial tcp 172.239.198.121:6443: connect: connection refused" interval="400ms" Mar 7 01:06:48.254310 kubelet[2168]: I0307 01:06:48.254241 2168 kubelet_node_status.go:75] "Attempting to register node" node="172-239-198-121" Mar 7 01:06:48.254900 kubelet[2168]: E0307 01:06:48.254554 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.198.121:6443/api/v1/nodes\": dial tcp 172.239.198.121:6443: connect: connection refused" node="172-239-198-121" Mar 7 01:06:48.356028 kubelet[2168]: E0307 01:06:48.355853 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:48.357191 containerd[1457]: time="2026-03-07T01:06:48.356696553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-198-121,Uid:cc045a6c948bf831983a8cd50cc32313,Namespace:kube-system,Attempt:0,}" Mar 7 01:06:48.366341 kubelet[2168]: E0307 01:06:48.366056 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:48.367312 containerd[1457]: time="2026-03-07T01:06:48.366952503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-198-121,Uid:33f4cb86d9f0dcc3daf9d01bdcbd6998,Namespace:kube-system,Attempt:0,}" Mar 7 01:06:48.374178 kubelet[2168]: E0307 01:06:48.374152 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:48.374644 containerd[1457]: 
time="2026-03-07T01:06:48.374612618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-198-121,Uid:9f0c87f48b81d468a3de35dfc1a01901,Namespace:kube-system,Attempt:0,}" Mar 7 01:06:48.484996 kubelet[2168]: E0307 01:06:48.484934 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.198.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-198-121?timeout=10s\": dial tcp 172.239.198.121:6443: connect: connection refused" interval="800ms" Mar 7 01:06:48.656388 kubelet[2168]: I0307 01:06:48.656277 2168 kubelet_node_status.go:75] "Attempting to register node" node="172-239-198-121" Mar 7 01:06:48.656631 kubelet[2168]: E0307 01:06:48.656601 2168 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.198.121:6443/api/v1/nodes\": dial tcp 172.239.198.121:6443: connect: connection refused" node="172-239-198-121" Mar 7 01:06:48.725233 kubelet[2168]: E0307 01:06:48.725151 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.198.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:06:48.850768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373622529.mount: Deactivated successfully. 
Mar 7 01:06:48.857020 containerd[1457]: time="2026-03-07T01:06:48.856952573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:06:48.858725 containerd[1457]: time="2026-03-07T01:06:48.858678396Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:06:48.859921 containerd[1457]: time="2026-03-07T01:06:48.859864519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:06:48.860812 containerd[1457]: time="2026-03-07T01:06:48.860711570Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:06:48.862316 containerd[1457]: time="2026-03-07T01:06:48.861693122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062"
Mar 7 01:06:48.862316 containerd[1457]: time="2026-03-07T01:06:48.862169463Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:06:48.862955 containerd[1457]: time="2026-03-07T01:06:48.862928055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:06:48.865582 containerd[1457]: time="2026-03-07T01:06:48.865554240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:06:48.867136 containerd[1457]: time="2026-03-07T01:06:48.867112473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.33789ms"
Mar 7 01:06:48.868397 containerd[1457]: time="2026-03-07T01:06:48.868356146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.308933ms"
Mar 7 01:06:48.869716 containerd[1457]: time="2026-03-07T01:06:48.869691708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.024509ms"
Mar 7 01:06:48.889307 kubelet[2168]: E0307 01:06:48.888576 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.198.121:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-198-121&limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:06:48.985168 containerd[1457]: time="2026-03-07T01:06:48.984582418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:06:48.985168 containerd[1457]: time="2026-03-07T01:06:48.984728338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:06:48.985168 containerd[1457]: time="2026-03-07T01:06:48.984748298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:48.987349 containerd[1457]: time="2026-03-07T01:06:48.985462060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:48.988404 containerd[1457]: time="2026-03-07T01:06:48.988340216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:06:48.988462 containerd[1457]: time="2026-03-07T01:06:48.988426566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:06:48.988524 containerd[1457]: time="2026-03-07T01:06:48.988490016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:48.988719 containerd[1457]: time="2026-03-07T01:06:48.988684376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:48.993999 containerd[1457]: time="2026-03-07T01:06:48.993813547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:06:48.993999 containerd[1457]: time="2026-03-07T01:06:48.993847137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:06:48.993999 containerd[1457]: time="2026-03-07T01:06:48.993857537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:48.993999 containerd[1457]: time="2026-03-07T01:06:48.993922867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:06:49.011307 systemd[1]: Started cri-containerd-2dfab521b2b9d687110e934056853c7837bc96727f0ab220489a463a39b79569.scope - libcontainer container 2dfab521b2b9d687110e934056853c7837bc96727f0ab220489a463a39b79569.
Mar 7 01:06:49.016417 systemd[1]: Started cri-containerd-0d2402fd022d1ef08318049f9828ddaba65a18e70753a7ec293362a567aca14d.scope - libcontainer container 0d2402fd022d1ef08318049f9828ddaba65a18e70753a7ec293362a567aca14d.
Mar 7 01:06:49.041431 systemd[1]: Started cri-containerd-29ad5426b4d2f3dbded4f6599fd64925a99e7be12c50bd187beb4827af309839.scope - libcontainer container 29ad5426b4d2f3dbded4f6599fd64925a99e7be12c50bd187beb4827af309839.
Mar 7 01:06:49.078730 kubelet[2168]: E0307 01:06:49.076524 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.198.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:06:49.101885 containerd[1457]: time="2026-03-07T01:06:49.100824921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-198-121,Uid:9f0c87f48b81d468a3de35dfc1a01901,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dfab521b2b9d687110e934056853c7837bc96727f0ab220489a463a39b79569\""
Mar 7 01:06:49.101968 kubelet[2168]: E0307 01:06:49.101639 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:49.106162 containerd[1457]: time="2026-03-07T01:06:49.106029521Z" level=info msg="CreateContainer within sandbox \"2dfab521b2b9d687110e934056853c7837bc96727f0ab220489a463a39b79569\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:06:49.110107 containerd[1457]: time="2026-03-07T01:06:49.109919519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-198-121,Uid:33f4cb86d9f0dcc3daf9d01bdcbd6998,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2402fd022d1ef08318049f9828ddaba65a18e70753a7ec293362a567aca14d\""
Mar 7 01:06:49.111047 kubelet[2168]: E0307 01:06:49.110923 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:49.117166 containerd[1457]: time="2026-03-07T01:06:49.117132343Z" level=info msg="CreateContainer within sandbox \"0d2402fd022d1ef08318049f9828ddaba65a18e70753a7ec293362a567aca14d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:06:49.129311 containerd[1457]: time="2026-03-07T01:06:49.127170793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-198-121,Uid:cc045a6c948bf831983a8cd50cc32313,Namespace:kube-system,Attempt:0,} returns sandbox id \"29ad5426b4d2f3dbded4f6599fd64925a99e7be12c50bd187beb4827af309839\""
Mar 7 01:06:49.129379 kubelet[2168]: E0307 01:06:49.127819 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:49.131898 containerd[1457]: time="2026-03-07T01:06:49.131875133Z" level=info msg="CreateContainer within sandbox \"29ad5426b4d2f3dbded4f6599fd64925a99e7be12c50bd187beb4827af309839\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:06:49.133705 containerd[1457]: time="2026-03-07T01:06:49.133665106Z" level=info msg="CreateContainer within sandbox \"2dfab521b2b9d687110e934056853c7837bc96727f0ab220489a463a39b79569\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbfbad0cfb381dd54c5ff42c2f32bb264b2a6a1d0209ee1581dfb1c1e2b48275\""
Mar 7 01:06:49.134485 containerd[1457]: time="2026-03-07T01:06:49.134452398Z" level=info msg="StartContainer for \"cbfbad0cfb381dd54c5ff42c2f32bb264b2a6a1d0209ee1581dfb1c1e2b48275\""
Mar 7 01:06:49.140888 containerd[1457]: time="2026-03-07T01:06:49.140800090Z" level=info msg="CreateContainer within sandbox \"0d2402fd022d1ef08318049f9828ddaba65a18e70753a7ec293362a567aca14d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6e6e571936083f2e08090e919ad7bfd4d34b078ea7722f22eeb0a311e23b24a\""
Mar 7 01:06:49.142677 kubelet[2168]: E0307 01:06:49.142654 2168 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.198.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.198.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:06:49.142989 containerd[1457]: time="2026-03-07T01:06:49.142970135Z" level=info msg="StartContainer for \"a6e6e571936083f2e08090e919ad7bfd4d34b078ea7722f22eeb0a311e23b24a\""
Mar 7 01:06:49.151005 containerd[1457]: time="2026-03-07T01:06:49.150972191Z" level=info msg="CreateContainer within sandbox \"29ad5426b4d2f3dbded4f6599fd64925a99e7be12c50bd187beb4827af309839\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a5b974ee52d0b9898b72953ae6e96a35b8ad2ddc1993ffb4411f42c52a1261f\""
Mar 7 01:06:49.151595 containerd[1457]: time="2026-03-07T01:06:49.151400092Z" level=info msg="StartContainer for \"2a5b974ee52d0b9898b72953ae6e96a35b8ad2ddc1993ffb4411f42c52a1261f\""
Mar 7 01:06:49.176728 systemd[1]: Started cri-containerd-cbfbad0cfb381dd54c5ff42c2f32bb264b2a6a1d0209ee1581dfb1c1e2b48275.scope - libcontainer container cbfbad0cfb381dd54c5ff42c2f32bb264b2a6a1d0209ee1581dfb1c1e2b48275.
Mar 7 01:06:49.187408 systemd[1]: Started cri-containerd-a6e6e571936083f2e08090e919ad7bfd4d34b078ea7722f22eeb0a311e23b24a.scope - libcontainer container a6e6e571936083f2e08090e919ad7bfd4d34b078ea7722f22eeb0a311e23b24a.
Mar 7 01:06:49.191466 systemd[1]: Started cri-containerd-2a5b974ee52d0b9898b72953ae6e96a35b8ad2ddc1993ffb4411f42c52a1261f.scope - libcontainer container 2a5b974ee52d0b9898b72953ae6e96a35b8ad2ddc1993ffb4411f42c52a1261f.
Mar 7 01:06:49.277159 containerd[1457]: time="2026-03-07T01:06:49.276876003Z" level=info msg="StartContainer for \"a6e6e571936083f2e08090e919ad7bfd4d34b078ea7722f22eeb0a311e23b24a\" returns successfully"
Mar 7 01:06:49.277159 containerd[1457]: time="2026-03-07T01:06:49.277022413Z" level=info msg="StartContainer for \"cbfbad0cfb381dd54c5ff42c2f32bb264b2a6a1d0209ee1581dfb1c1e2b48275\" returns successfully"
Mar 7 01:06:49.277832 containerd[1457]: time="2026-03-07T01:06:49.277708874Z" level=info msg="StartContainer for \"2a5b974ee52d0b9898b72953ae6e96a35b8ad2ddc1993ffb4411f42c52a1261f\" returns successfully"
Mar 7 01:06:49.286468 kubelet[2168]: E0307 01:06:49.286412 2168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.198.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-198-121?timeout=10s\": dial tcp 172.239.198.121:6443: connect: connection refused" interval="1.6s"
Mar 7 01:06:49.466314 kubelet[2168]: I0307 01:06:49.464976 2168 kubelet_node_status.go:75] "Attempting to register node" node="172-239-198-121"
Mar 7 01:06:49.928924 kubelet[2168]: E0307 01:06:49.928716 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:49.929433 kubelet[2168]: E0307 01:06:49.929075 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:49.930553 kubelet[2168]: E0307 01:06:49.930538 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:49.931028 kubelet[2168]: E0307 01:06:49.931015 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:49.933498 kubelet[2168]: E0307 01:06:49.933342 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:49.933498 kubelet[2168]: E0307 01:06:49.933427 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:50.927255 kubelet[2168]: E0307 01:06:50.927193 2168 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:50.937948 kubelet[2168]: E0307 01:06:50.936543 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:50.937948 kubelet[2168]: E0307 01:06:50.936649 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:50.937948 kubelet[2168]: E0307 01:06:50.937503 2168 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-198-121\" not found" node="172-239-198-121"
Mar 7 01:06:50.937948 kubelet[2168]: E0307 01:06:50.937599 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:51.011847 kubelet[2168]: I0307 01:06:51.011228 2168 kubelet_node_status.go:78] "Successfully registered node" node="172-239-198-121"
Mar 7 01:06:51.012302 kubelet[2168]: E0307 01:06:51.012119 2168 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-239-198-121\": node \"172-239-198-121\" not found"
Mar 7 01:06:51.039856 kubelet[2168]: E0307 01:06:51.039826 2168 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-198-121\" not found"
Mar 7 01:06:51.140527 kubelet[2168]: E0307 01:06:51.140483 2168 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-198-121\" not found"
Mar 7 01:06:51.284369 kubelet[2168]: I0307 01:06:51.284234 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:51.291167 kubelet[2168]: E0307 01:06:51.291123 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-198-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:51.291167 kubelet[2168]: I0307 01:06:51.291149 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:51.292361 kubelet[2168]: E0307 01:06:51.292326 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-198-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:51.292361 kubelet[2168]: I0307 01:06:51.292354 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:51.293394 kubelet[2168]: E0307 01:06:51.293369 2168 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-198-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:51.862648 kubelet[2168]: I0307 01:06:51.862609 2168 apiserver.go:52] "Watching apiserver"
Mar 7 01:06:51.882710 kubelet[2168]: I0307 01:06:51.882598 2168 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 01:06:51.934466 kubelet[2168]: I0307 01:06:51.934434 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:51.939633 kubelet[2168]: E0307 01:06:51.939526 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:52.936736 kubelet[2168]: E0307 01:06:52.936679 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:53.116326 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)...
Mar 7 01:06:53.116347 systemd[1]: Reloading...
Mar 7 01:06:53.250397 zram_generator::config[2492]: No configuration found.
Mar 7 01:06:53.358714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:06:53.391583 kubelet[2168]: I0307 01:06:53.391240 2168 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.398587 kubelet[2168]: E0307 01:06:53.398566 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:53.441774 systemd[1]: Reloading finished in 324 ms.
Mar 7 01:06:53.496564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:06:53.514520 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:06:53.514789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:06:53.514853 systemd[1]: kubelet.service: Consumed 1.142s CPU time, 134.1M memory peak, 0B memory swap peak.
Mar 7 01:06:53.519477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:06:53.679132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:06:53.687596 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:06:53.730307 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:06:53.730307 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:06:53.730307 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:06:53.730307 kubelet[2539]: I0307 01:06:53.730147 2539 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:06:53.738703 kubelet[2539]: I0307 01:06:53.738683 2539 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:06:53.738783 kubelet[2539]: I0307 01:06:53.738773 2539 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:06:53.738986 kubelet[2539]: I0307 01:06:53.738973 2539 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:06:53.740003 kubelet[2539]: I0307 01:06:53.739986 2539 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:06:53.741997 kubelet[2539]: I0307 01:06:53.741887 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:06:53.745108 kubelet[2539]: E0307 01:06:53.745077 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:06:53.746314 kubelet[2539]: I0307 01:06:53.745406 2539 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:06:53.751843 kubelet[2539]: I0307 01:06:53.749981 2539 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:06:53.751843 kubelet[2539]: I0307 01:06:53.750469 2539 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:06:53.751843 kubelet[2539]: I0307 01:06:53.750489 2539 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-198-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:06:53.751843 kubelet[2539]: I0307 01:06:53.750757 2539 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.750796 2539 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.750843 2539 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.751089 2539 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.751127 2539 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.751154 2539 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:06:53.752074 kubelet[2539]: I0307 01:06:53.751173 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:06:53.759013 kubelet[2539]: I0307 01:06:53.758979 2539 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:06:53.759478 kubelet[2539]: I0307 01:06:53.759455 2539 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:06:53.762744 kubelet[2539]: I0307 01:06:53.761924 2539 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:06:53.762744 kubelet[2539]: I0307 01:06:53.761956 2539 server.go:1289] "Started kubelet"
Mar 7 01:06:53.762744 kubelet[2539]: I0307 01:06:53.762217 2539 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:06:53.762744 kubelet[2539]: I0307 01:06:53.762641 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:06:53.763009 kubelet[2539]: I0307 01:06:53.762980 2539 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:06:53.765719 kubelet[2539]: I0307 01:06:53.763762 2539 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:06:53.767751 kubelet[2539]: I0307 01:06:53.766961 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:06:53.772221 kubelet[2539]: I0307 01:06:53.772139 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:06:53.774458 kubelet[2539]: I0307 01:06:53.774427 2539 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:06:53.774695 kubelet[2539]: I0307 01:06:53.774624 2539 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:06:53.774824 kubelet[2539]: I0307 01:06:53.774813 2539 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:06:53.775841 kubelet[2539]: I0307 01:06:53.775811 2539 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:06:53.777895 kubelet[2539]: E0307 01:06:53.777425 2539 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:06:53.784402 kubelet[2539]: I0307 01:06:53.784375 2539 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:06:53.784402 kubelet[2539]: I0307 01:06:53.784394 2539 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:06:53.785117 kubelet[2539]: I0307 01:06:53.784508 2539 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:06:53.785231 kubelet[2539]: I0307 01:06:53.785215 2539 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:06:53.785319 kubelet[2539]: I0307 01:06:53.785308 2539 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:06:53.785383 kubelet[2539]: I0307 01:06:53.785372 2539 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:06:53.785429 kubelet[2539]: I0307 01:06:53.785421 2539 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:06:53.785539 kubelet[2539]: E0307 01:06:53.785511 2539 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:06:53.839897 kubelet[2539]: I0307 01:06:53.839871 2539 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:06:53.840048 kubelet[2539]: I0307 01:06:53.840034 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:06:53.840109 kubelet[2539]: I0307 01:06:53.840100 2539 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:06:53.840299 kubelet[2539]: I0307 01:06:53.840260 2539 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:06:53.840375 kubelet[2539]: I0307 01:06:53.840352 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:06:53.840425 kubelet[2539]: I0307 01:06:53.840417 2539 policy_none.go:49] "None policy: Start"
Mar 7 01:06:53.840469 kubelet[2539]: I0307 01:06:53.840461 2539 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:06:53.840522 kubelet[2539]: I0307 01:06:53.840514 2539 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:06:53.840672 kubelet[2539]: I0307 01:06:53.840660 2539 state_mem.go:75] "Updated machine memory state"
Mar 7 01:06:53.845586 kubelet[2539]: E0307 01:06:53.845555 2539 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:06:53.845756 kubelet[2539]: I0307 01:06:53.845730 2539 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:06:53.845794 kubelet[2539]: I0307 01:06:53.845747 2539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:06:53.846737 kubelet[2539]: I0307 01:06:53.846638 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:06:53.850945 kubelet[2539]: E0307 01:06:53.850819 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:06:53.886359 kubelet[2539]: I0307 01:06:53.886326 2539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.886831 kubelet[2539]: I0307 01:06:53.886615 2539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:53.887171 kubelet[2539]: I0307 01:06:53.887153 2539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:53.893409 kubelet[2539]: E0307 01:06:53.893373 2539 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-198-121\" already exists" pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.895256 kubelet[2539]: E0307 01:06:53.895199 2539 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-198-121\" already exists" pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:53.951723 kubelet[2539]: I0307 01:06:53.951602 2539 kubelet_node_status.go:75] "Attempting to register node" node="172-239-198-121"
Mar 7 01:06:53.964393 kubelet[2539]: I0307 01:06:53.964324 2539 kubelet_node_status.go:124] "Node was previously registered" node="172-239-198-121"
Mar 7 01:06:53.964393 kubelet[2539]: I0307 01:06:53.964407 2539 kubelet_node_status.go:78] "Successfully registered node" node="172-239-198-121"
Mar 7 01:06:53.976500 kubelet[2539]: I0307 01:06:53.976448 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.976500 kubelet[2539]: I0307 01:06:53.976496 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-flexvolume-dir\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:53.976672 kubelet[2539]: I0307 01:06:53.976525 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-k8s-certs\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:53.976672 kubelet[2539]: I0307 01:06:53.976559 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-kubeconfig\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:53.976672 kubelet[2539]: I0307 01:06:53.976584 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:53.976672 kubelet[2539]: I0307 01:06:53.976610 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc045a6c948bf831983a8cd50cc32313-kubeconfig\") pod \"kube-scheduler-172-239-198-121\" (UID: \"cc045a6c948bf831983a8cd50cc32313\") " pod="kube-system/kube-scheduler-172-239-198-121"
Mar 7 01:06:53.976672 kubelet[2539]: I0307 01:06:53.976644 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-ca-certs\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.976813 kubelet[2539]: I0307 01:06:53.976670 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33f4cb86d9f0dcc3daf9d01bdcbd6998-k8s-certs\") pod \"kube-apiserver-172-239-198-121\" (UID: \"33f4cb86d9f0dcc3daf9d01bdcbd6998\") " pod="kube-system/kube-apiserver-172-239-198-121"
Mar 7 01:06:53.976813 kubelet[2539]: I0307 01:06:53.976691 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f0c87f48b81d468a3de35dfc1a01901-ca-certs\") pod \"kube-controller-manager-172-239-198-121\" (UID: \"9f0c87f48b81d468a3de35dfc1a01901\") " pod="kube-system/kube-controller-manager-172-239-198-121"
Mar 7 01:06:54.194745 kubelet[2539]: E0307 01:06:54.194543 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:06:54.194745 kubelet[2539]: E0307 01:06:54.194665
2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:54.196683 kubelet[2539]: E0307 01:06:54.196662 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:54.754118 kubelet[2539]: I0307 01:06:54.754073 2539 apiserver.go:52] "Watching apiserver" Mar 7 01:06:54.774845 kubelet[2539]: I0307 01:06:54.774760 2539 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:06:54.823294 kubelet[2539]: I0307 01:06:54.823244 2539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-198-121" Mar 7 01:06:54.824391 kubelet[2539]: E0307 01:06:54.824358 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:54.825021 kubelet[2539]: I0307 01:06:54.824995 2539 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-198-121" Mar 7 01:06:54.836282 kubelet[2539]: E0307 01:06:54.835805 2539 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-198-121\" already exists" pod="kube-system/kube-apiserver-172-239-198-121" Mar 7 01:06:54.836282 kubelet[2539]: E0307 01:06:54.835952 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:54.836732 kubelet[2539]: E0307 01:06:54.836702 2539 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-198-121\" already exists" pod="kube-system/kube-scheduler-172-239-198-121" Mar 7 
01:06:54.836853 kubelet[2539]: E0307 01:06:54.836825 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:54.859958 kubelet[2539]: I0307 01:06:54.859883 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-198-121" podStartSLOduration=1.8598711159999999 podStartE2EDuration="1.859871116s" podCreationTimestamp="2026-03-07 01:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:06:54.854214745 +0000 UTC m=+1.160401992" watchObservedRunningTime="2026-03-07 01:06:54.859871116 +0000 UTC m=+1.166058353" Mar 7 01:06:54.865367 kubelet[2539]: I0307 01:06:54.865036 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-198-121" podStartSLOduration=3.8650250760000002 podStartE2EDuration="3.865025076s" podCreationTimestamp="2026-03-07 01:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:06:54.860363887 +0000 UTC m=+1.166551124" watchObservedRunningTime="2026-03-07 01:06:54.865025076 +0000 UTC m=+1.171212313" Mar 7 01:06:55.827088 kubelet[2539]: E0307 01:06:55.827055 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:55.827681 kubelet[2539]: E0307 01:06:55.827379 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:56.826907 kubelet[2539]: E0307 01:06:56.826843 2539 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:06:57.929469 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 01:06:58.889090 kubelet[2539]: E0307 01:06:58.889043 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:00.035436 kubelet[2539]: I0307 01:07:00.033208 2539 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:07:00.035832 containerd[1457]: time="2026-03-07T01:07:00.035357185Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:07:00.036557 kubelet[2539]: I0307 01:07:00.036307 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:07:00.439406 kubelet[2539]: I0307 01:07:00.439353 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-198-121" podStartSLOduration=7.439336862 podStartE2EDuration="7.439336862s" podCreationTimestamp="2026-03-07 01:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:06:54.865125617 +0000 UTC m=+1.171312854" watchObservedRunningTime="2026-03-07 01:07:00.439336862 +0000 UTC m=+6.745524099" Mar 7 01:07:00.454159 systemd[1]: Created slice kubepods-besteffort-pod3a20839f_6a11_44b9_a19b_7621ca8947eb.slice - libcontainer container kubepods-besteffort-pod3a20839f_6a11_44b9_a19b_7621ca8947eb.slice. 
Mar 7 01:07:00.518380 kubelet[2539]: I0307 01:07:00.518320 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a20839f-6a11-44b9-a19b-7621ca8947eb-kube-proxy\") pod \"kube-proxy-lqk4p\" (UID: \"3a20839f-6a11-44b9-a19b-7621ca8947eb\") " pod="kube-system/kube-proxy-lqk4p"
Mar 7 01:07:00.518380 kubelet[2539]: I0307 01:07:00.518385 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a20839f-6a11-44b9-a19b-7621ca8947eb-xtables-lock\") pod \"kube-proxy-lqk4p\" (UID: \"3a20839f-6a11-44b9-a19b-7621ca8947eb\") " pod="kube-system/kube-proxy-lqk4p"
Mar 7 01:07:00.518596 kubelet[2539]: I0307 01:07:00.518407 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a20839f-6a11-44b9-a19b-7621ca8947eb-lib-modules\") pod \"kube-proxy-lqk4p\" (UID: \"3a20839f-6a11-44b9-a19b-7621ca8947eb\") " pod="kube-system/kube-proxy-lqk4p"
Mar 7 01:07:00.518596 kubelet[2539]: I0307 01:07:00.518431 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t275v\" (UniqueName: \"kubernetes.io/projected/3a20839f-6a11-44b9-a19b-7621ca8947eb-kube-api-access-t275v\") pod \"kube-proxy-lqk4p\" (UID: \"3a20839f-6a11-44b9-a19b-7621ca8947eb\") " pod="kube-system/kube-proxy-lqk4p"
Mar 7 01:07:00.623734 kubelet[2539]: E0307 01:07:00.623703 2539 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 7 01:07:00.623734 kubelet[2539]: E0307 01:07:00.623731 2539 projected.go:194] Error preparing data for projected volume kube-api-access-t275v for pod kube-system/kube-proxy-lqk4p: configmap "kube-root-ca.crt" not found
Mar 7 01:07:00.623879 kubelet[2539]: E0307 01:07:00.623778 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a20839f-6a11-44b9-a19b-7621ca8947eb-kube-api-access-t275v podName:3a20839f-6a11-44b9-a19b-7621ca8947eb nodeName:}" failed. No retries permitted until 2026-03-07 01:07:01.123760991 +0000 UTC m=+7.429948238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t275v" (UniqueName: "kubernetes.io/projected/3a20839f-6a11-44b9-a19b-7621ca8947eb-kube-api-access-t275v") pod "kube-proxy-lqk4p" (UID: "3a20839f-6a11-44b9-a19b-7621ca8947eb") : configmap "kube-root-ca.crt" not found
Mar 7 01:07:01.278575 systemd[1]: Created slice kubepods-besteffort-pod2b45d32d_eb12_47ea_b6b5_0490cb4c9c2d.slice - libcontainer container kubepods-besteffort-pod2b45d32d_eb12_47ea_b6b5_0490cb4c9c2d.slice.
Mar 7 01:07:01.301285 kubelet[2539]: E0307 01:07:01.299238 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:01.323461 kubelet[2539]: I0307 01:07:01.323408 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgbs\" (UniqueName: \"kubernetes.io/projected/2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d-kube-api-access-bpgbs\") pod \"tigera-operator-6bf85f8dd-bjnsn\" (UID: \"2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjnsn"
Mar 7 01:07:01.323461 kubelet[2539]: I0307 01:07:01.323441 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-bjnsn\" (UID: \"2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjnsn"
Mar 7 01:07:01.361670 kubelet[2539]: E0307 01:07:01.361628 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:01.362205 containerd[1457]: time="2026-03-07T01:07:01.362142272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lqk4p,Uid:3a20839f-6a11-44b9-a19b-7621ca8947eb,Namespace:kube-system,Attempt:0,}"
Mar 7 01:07:01.384002 containerd[1457]: time="2026-03-07T01:07:01.383729954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:07:01.384002 containerd[1457]: time="2026-03-07T01:07:01.383781525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:07:01.384002 containerd[1457]: time="2026-03-07T01:07:01.383792065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:01.384002 containerd[1457]: time="2026-03-07T01:07:01.383858816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:01.405630 systemd[1]: run-containerd-runc-k8s.io-85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521-runc.XlGnlp.mount: Deactivated successfully.
Mar 7 01:07:01.416402 systemd[1]: Started cri-containerd-85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521.scope - libcontainer container 85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521.
Mar 7 01:07:01.448298 containerd[1457]: time="2026-03-07T01:07:01.448235160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lqk4p,Uid:3a20839f-6a11-44b9-a19b-7621ca8947eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521\""
Mar 7 01:07:01.449107 kubelet[2539]: E0307 01:07:01.448934 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:01.453019 containerd[1457]: time="2026-03-07T01:07:01.452943815Z" level=info msg="CreateContainer within sandbox \"85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:07:01.466492 containerd[1457]: time="2026-03-07T01:07:01.466459273Z" level=info msg="CreateContainer within sandbox \"85660f0ee026ae9967ce6f0e0189b43d7687197c125ec48300ac5d9ae9e8e521\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b1c0d5dd94f209b58ae00a9b5458cab909d141ad4a78b100bd9c9f3ab19dd47\""
Mar 7 01:07:01.468318 containerd[1457]: time="2026-03-07T01:07:01.467339144Z" level=info msg="StartContainer for \"4b1c0d5dd94f209b58ae00a9b5458cab909d141ad4a78b100bd9c9f3ab19dd47\""
Mar 7 01:07:01.500406 systemd[1]: Started cri-containerd-4b1c0d5dd94f209b58ae00a9b5458cab909d141ad4a78b100bd9c9f3ab19dd47.scope - libcontainer container 4b1c0d5dd94f209b58ae00a9b5458cab909d141ad4a78b100bd9c9f3ab19dd47.
Mar 7 01:07:01.527322 containerd[1457]: time="2026-03-07T01:07:01.527026213Z" level=info msg="StartContainer for \"4b1c0d5dd94f209b58ae00a9b5458cab909d141ad4a78b100bd9c9f3ab19dd47\" returns successfully"
Mar 7 01:07:01.584879 containerd[1457]: time="2026-03-07T01:07:01.583792428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjnsn,Uid:2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d,Namespace:tigera-operator,Attempt:0,}"
Mar 7 01:07:01.605821 containerd[1457]: time="2026-03-07T01:07:01.605582073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:07:01.605821 containerd[1457]: time="2026-03-07T01:07:01.605628593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:07:01.605821 containerd[1457]: time="2026-03-07T01:07:01.605642253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:01.605821 containerd[1457]: time="2026-03-07T01:07:01.605714665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:01.629402 systemd[1]: Started cri-containerd-7815c372faba7d6c3323f21b34c9305d778dc0b3ad78fcea767401728388af40.scope - libcontainer container 7815c372faba7d6c3323f21b34c9305d778dc0b3ad78fcea767401728388af40.
Mar 7 01:07:01.672850 containerd[1457]: time="2026-03-07T01:07:01.672811601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjnsn,Uid:2b45d32d-eb12-47ea-b6b5-0490cb4c9c2d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7815c372faba7d6c3323f21b34c9305d778dc0b3ad78fcea767401728388af40\""
Mar 7 01:07:01.674657 containerd[1457]: time="2026-03-07T01:07:01.674624082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 7 01:07:01.837228 kubelet[2539]: E0307 01:07:01.836311 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:01.837228 kubelet[2539]: E0307 01:07:01.836778 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:01.863516 kubelet[2539]: I0307 01:07:01.863454 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lqk4p" podStartSLOduration=1.863442074 podStartE2EDuration="1.863442074s" podCreationTimestamp="2026-03-07 01:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:07:01.847064482 +0000 UTC m=+8.153251719" watchObservedRunningTime="2026-03-07 01:07:01.863442074 +0000 UTC m=+8.169629311"
Mar 7 01:07:02.426614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959575259.mount: Deactivated successfully.
Mar 7 01:07:03.992706 containerd[1457]: time="2026-03-07T01:07:03.992331400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:03.994011 containerd[1457]: time="2026-03-07T01:07:03.993543902Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 7 01:07:03.995287 containerd[1457]: time="2026-03-07T01:07:03.994326621Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:03.999912 containerd[1457]: time="2026-03-07T01:07:03.999591616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:04.001311 containerd[1457]: time="2026-03-07T01:07:04.001145401Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.326467659s"
Mar 7 01:07:04.001361 containerd[1457]: time="2026-03-07T01:07:04.001245013Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 7 01:07:04.007222 containerd[1457]: time="2026-03-07T01:07:04.007150842Z" level=info msg="CreateContainer within sandbox \"7815c372faba7d6c3323f21b34c9305d778dc0b3ad78fcea767401728388af40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 7 01:07:04.021442 containerd[1457]: time="2026-03-07T01:07:04.021409805Z" level=info msg="CreateContainer within sandbox \"7815c372faba7d6c3323f21b34c9305d778dc0b3ad78fcea767401728388af40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f8f4415ee14a731d733878cd60e08c6e1789ef48a812a2e0c14a9e3e75814134\""
Mar 7 01:07:04.021965 containerd[1457]: time="2026-03-07T01:07:04.021946700Z" level=info msg="StartContainer for \"f8f4415ee14a731d733878cd60e08c6e1789ef48a812a2e0c14a9e3e75814134\""
Mar 7 01:07:04.057602 systemd[1]: Started cri-containerd-f8f4415ee14a731d733878cd60e08c6e1789ef48a812a2e0c14a9e3e75814134.scope - libcontainer container f8f4415ee14a731d733878cd60e08c6e1789ef48a812a2e0c14a9e3e75814134.
Mar 7 01:07:04.088455 containerd[1457]: time="2026-03-07T01:07:04.088226733Z" level=info msg="StartContainer for \"f8f4415ee14a731d733878cd60e08c6e1789ef48a812a2e0c14a9e3e75814134\" returns successfully"
Mar 7 01:07:05.959292 kubelet[2539]: E0307 01:07:05.958632 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:05.990490 kubelet[2539]: I0307 01:07:05.990420 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-bjnsn" podStartSLOduration=2.661510724 podStartE2EDuration="4.990401369s" podCreationTimestamp="2026-03-07 01:07:01 +0000 UTC" firstStartedPulling="2026-03-07 01:07:01.674038105 +0000 UTC m=+7.980225352" lastFinishedPulling="2026-03-07 01:07:04.00292876 +0000 UTC m=+10.309115997" observedRunningTime="2026-03-07 01:07:04.857197226 +0000 UTC m=+11.163384463" watchObservedRunningTime="2026-03-07 01:07:05.990401369 +0000 UTC m=+12.296588606"
Mar 7 01:07:08.893843 kubelet[2539]: E0307 01:07:08.893515 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:09.775801 sudo[1681]: pam_unix(sudo:session): session closed for user root
Mar 7 01:07:09.802080 sshd[1678]: pam_unix(sshd:session): session closed for user core
Mar 7 01:07:09.808956 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:07:09.810694 systemd[1]: sshd@6-172.239.198.121:22-68.220.241.50:52422.service: Deactivated successfully.
Mar 7 01:07:09.815966 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:07:09.821257 systemd[1]: session-7.scope: Consumed 5.242s CPU time, 156.7M memory peak, 0B memory swap peak.
Mar 7 01:07:09.823501 systemd-logind[1442]: Removed session 7.
Mar 7 01:07:09.857955 kubelet[2539]: E0307 01:07:09.856989 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:12.169815 systemd[1]: Created slice kubepods-besteffort-pod16a1c8a0_2405_4fba_b8d7_efa3b37fd076.slice - libcontainer container kubepods-besteffort-pod16a1c8a0_2405_4fba_b8d7_efa3b37fd076.slice.
Mar 7 01:07:12.193098 kubelet[2539]: I0307 01:07:12.193064 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/16a1c8a0-2405-4fba-b8d7-efa3b37fd076-typha-certs\") pod \"calico-typha-6b768f47d6-77jlk\" (UID: \"16a1c8a0-2405-4fba-b8d7-efa3b37fd076\") " pod="calico-system/calico-typha-6b768f47d6-77jlk"
Mar 7 01:07:12.193660 kubelet[2539]: I0307 01:07:12.193134 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhfv\" (UniqueName: \"kubernetes.io/projected/16a1c8a0-2405-4fba-b8d7-efa3b37fd076-kube-api-access-qbhfv\") pod \"calico-typha-6b768f47d6-77jlk\" (UID: \"16a1c8a0-2405-4fba-b8d7-efa3b37fd076\") " pod="calico-system/calico-typha-6b768f47d6-77jlk"
Mar 7 01:07:12.193660 kubelet[2539]: I0307 01:07:12.193156 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16a1c8a0-2405-4fba-b8d7-efa3b37fd076-tigera-ca-bundle\") pod \"calico-typha-6b768f47d6-77jlk\" (UID: \"16a1c8a0-2405-4fba-b8d7-efa3b37fd076\") " pod="calico-system/calico-typha-6b768f47d6-77jlk"
Mar 7 01:07:12.251980 systemd[1]: Created slice kubepods-besteffort-podabf805b0_7331_4d20_a158_cd804b85c9b9.slice - libcontainer container kubepods-besteffort-podabf805b0_7331_4d20_a158_cd804b85c9b9.slice.
Mar 7 01:07:12.294316 kubelet[2539]: I0307 01:07:12.294278 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-cni-bin-dir\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294316 kubelet[2539]: I0307 01:07:12.294315 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-cni-log-dir\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294316 kubelet[2539]: I0307 01:07:12.294334 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-policysync\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294519 kubelet[2539]: I0307 01:07:12.294350 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-sys-fs\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294519 kubelet[2539]: I0307 01:07:12.294367 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-var-lib-calico\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294519 kubelet[2539]: I0307 01:07:12.294383 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-cni-net-dir\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294519 kubelet[2539]: I0307 01:07:12.294399 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-lib-modules\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294519 kubelet[2539]: I0307 01:07:12.294428 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-flexvol-driver-host\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294884 kubelet[2539]: I0307 01:07:12.294458 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/abf805b0-7331-4d20-a158-cd804b85c9b9-node-certs\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294884 kubelet[2539]: I0307 01:07:12.294473 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-nodeproc\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294884 kubelet[2539]: I0307 01:07:12.294490 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-var-run-calico\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294884 kubelet[2539]: I0307 01:07:12.294506 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-xtables-lock\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.294884 kubelet[2539]: I0307 01:07:12.294524 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz722\" (UniqueName: \"kubernetes.io/projected/abf805b0-7331-4d20-a158-cd804b85c9b9-kube-api-access-lz722\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.295009 kubelet[2539]: I0307 01:07:12.294550 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abf805b0-7331-4d20-a158-cd804b85c9b9-tigera-ca-bundle\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.295009 kubelet[2539]: I0307 01:07:12.294578 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/abf805b0-7331-4d20-a158-cd804b85c9b9-bpffs\") pod \"calico-node-rszbn\" (UID: \"abf805b0-7331-4d20-a158-cd804b85c9b9\") " pod="calico-system/calico-node-rszbn"
Mar 7 01:07:12.342809 kubelet[2539]: E0307 01:07:12.342064 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wh6l9" podUID="276b50de-650e-4285-9112-60e139a99998"
Mar 7 01:07:12.396922 kubelet[2539]: I0307 01:07:12.396863 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/276b50de-650e-4285-9112-60e139a99998-registration-dir\") pod \"csi-node-driver-wh6l9\" (UID: \"276b50de-650e-4285-9112-60e139a99998\") " pod="calico-system/csi-node-driver-wh6l9"
Mar 7 01:07:12.396922 kubelet[2539]: I0307 01:07:12.396903 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/276b50de-650e-4285-9112-60e139a99998-socket-dir\") pod \"csi-node-driver-wh6l9\" (UID: \"276b50de-650e-4285-9112-60e139a99998\") " pod="calico-system/csi-node-driver-wh6l9"
Mar 7 01:07:12.396922 kubelet[2539]: I0307 01:07:12.396922 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq696\" (UniqueName: \"kubernetes.io/projected/276b50de-650e-4285-9112-60e139a99998-kube-api-access-jq696\") pod \"csi-node-driver-wh6l9\" (UID: \"276b50de-650e-4285-9112-60e139a99998\") " pod="calico-system/csi-node-driver-wh6l9"
Mar 7 01:07:12.397155 kubelet[2539]: I0307 01:07:12.396944 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/276b50de-650e-4285-9112-60e139a99998-varrun\") pod \"csi-node-driver-wh6l9\" (UID: \"276b50de-650e-4285-9112-60e139a99998\") " pod="calico-system/csi-node-driver-wh6l9"
Mar 7 01:07:12.397155 kubelet[2539]: I0307 01:07:12.397020 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/276b50de-650e-4285-9112-60e139a99998-kubelet-dir\") pod \"csi-node-driver-wh6l9\" (UID: \"276b50de-650e-4285-9112-60e139a99998\") " pod="calico-system/csi-node-driver-wh6l9"
Mar 7 01:07:12.401799 kubelet[2539]: E0307 01:07:12.398213 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:07:12.401799 kubelet[2539]: W0307 01:07:12.398232 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:07:12.401799 kubelet[2539]: E0307 01:07:12.398249 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:07:12.401799 kubelet[2539]: E0307 01:07:12.398513 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:07:12.401799 kubelet[2539]: W0307 01:07:12.398522 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:07:12.401799 kubelet[2539]: E0307 01:07:12.398532 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" […the driver-call.go:262 / driver-call.go:149 / plugins.go:703 FlexVolume probe-failure triplet above repeats dozens of times between 01:07:12.398 and 01:07:12.423; identical records elided…] Mar 7 01:07:12.475158 kubelet[2539]: E0307 01:07:12.475117 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:12.476748 containerd[1457]: time="2026-03-07T01:07:12.476092888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b768f47d6-77jlk,Uid:16a1c8a0-2405-4fba-b8d7-efa3b37fd076,Namespace:calico-system,Attempt:0,}" Mar 7 01:07:12.497854 kubelet[2539]: E0307 01:07:12.497829 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.497979 kubelet[2539]: W0307 01:07:12.497965 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.498072 kubelet[2539]: E0307 01:07:12.498059 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.498556 kubelet[2539]: E0307 01:07:12.498532 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.498737 kubelet[2539]: W0307 01:07:12.498644 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.498737 kubelet[2539]: E0307 01:07:12.498663 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.499447 kubelet[2539]: E0307 01:07:12.499400 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.499447 kubelet[2539]: W0307 01:07:12.499423 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.499447 kubelet[2539]: E0307 01:07:12.499443 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.499725 kubelet[2539]: E0307 01:07:12.499694 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.499767 kubelet[2539]: W0307 01:07:12.499745 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.499767 kubelet[2539]: E0307 01:07:12.499757 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.500253 kubelet[2539]: E0307 01:07:12.500237 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.500322 kubelet[2539]: W0307 01:07:12.500305 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.500322 kubelet[2539]: E0307 01:07:12.500318 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.500793 kubelet[2539]: E0307 01:07:12.500766 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.500793 kubelet[2539]: W0307 01:07:12.500780 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.500793 kubelet[2539]: E0307 01:07:12.500790 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.501191 containerd[1457]: time="2026-03-07T01:07:12.501082367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:12.501191 containerd[1457]: time="2026-03-07T01:07:12.501128428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:12.501191 containerd[1457]: time="2026-03-07T01:07:12.501149008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:12.501375 kubelet[2539]: E0307 01:07:12.501352 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.501375 kubelet[2539]: W0307 01:07:12.501362 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.501375 kubelet[2539]: E0307 01:07:12.501371 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.501909 kubelet[2539]: E0307 01:07:12.501886 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.501909 kubelet[2539]: W0307 01:07:12.501900 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.502013 kubelet[2539]: E0307 01:07:12.501910 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.502037 containerd[1457]: time="2026-03-07T01:07:12.501234749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:12.502514 kubelet[2539]: E0307 01:07:12.502501 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.502614 kubelet[2539]: W0307 01:07:12.502569 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.502614 kubelet[2539]: E0307 01:07:12.502587 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.503057 kubelet[2539]: E0307 01:07:12.503023 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.503057 kubelet[2539]: W0307 01:07:12.503034 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.503057 kubelet[2539]: E0307 01:07:12.503044 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.504480 kubelet[2539]: E0307 01:07:12.504402 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.504480 kubelet[2539]: W0307 01:07:12.504417 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.504480 kubelet[2539]: E0307 01:07:12.504428 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.504705 kubelet[2539]: E0307 01:07:12.504679 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.504705 kubelet[2539]: W0307 01:07:12.504688 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.504705 kubelet[2539]: E0307 01:07:12.504696 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.505024 kubelet[2539]: E0307 01:07:12.504931 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.505024 kubelet[2539]: W0307 01:07:12.504940 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.505024 kubelet[2539]: E0307 01:07:12.504948 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.505377 kubelet[2539]: E0307 01:07:12.505224 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.505377 kubelet[2539]: W0307 01:07:12.505234 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.505377 kubelet[2539]: E0307 01:07:12.505242 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.506655 kubelet[2539]: E0307 01:07:12.506447 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.506655 kubelet[2539]: W0307 01:07:12.506459 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.506655 kubelet[2539]: E0307 01:07:12.506467 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.506848 kubelet[2539]: E0307 01:07:12.506689 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.506848 kubelet[2539]: W0307 01:07:12.506699 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.506848 kubelet[2539]: E0307 01:07:12.506708 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.506993 kubelet[2539]: E0307 01:07:12.506973 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.506993 kubelet[2539]: W0307 01:07:12.506988 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.507066 kubelet[2539]: E0307 01:07:12.506996 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.507364 kubelet[2539]: E0307 01:07:12.507344 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.507364 kubelet[2539]: W0307 01:07:12.507359 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.507715 kubelet[2539]: E0307 01:07:12.507367 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.507715 kubelet[2539]: E0307 01:07:12.507684 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.507715 kubelet[2539]: W0307 01:07:12.507692 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.507715 kubelet[2539]: E0307 01:07:12.507700 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.508222 kubelet[2539]: E0307 01:07:12.507991 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.508222 kubelet[2539]: W0307 01:07:12.508006 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.508222 kubelet[2539]: E0307 01:07:12.508014 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.508587 kubelet[2539]: E0307 01:07:12.508312 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.508587 kubelet[2539]: W0307 01:07:12.508321 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.508587 kubelet[2539]: E0307 01:07:12.508329 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.508587 kubelet[2539]: E0307 01:07:12.508539 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.508587 kubelet[2539]: W0307 01:07:12.508546 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.508587 kubelet[2539]: E0307 01:07:12.508554 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.508901 kubelet[2539]: E0307 01:07:12.508882 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.508901 kubelet[2539]: W0307 01:07:12.508896 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.508949 kubelet[2539]: E0307 01:07:12.508904 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.509205 kubelet[2539]: E0307 01:07:12.509133 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.509205 kubelet[2539]: W0307 01:07:12.509148 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.509205 kubelet[2539]: E0307 01:07:12.509156 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.510130 kubelet[2539]: E0307 01:07:12.509653 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.510130 kubelet[2539]: W0307 01:07:12.509665 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.510130 kubelet[2539]: E0307 01:07:12.509673 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:07:12.521779 kubelet[2539]: E0307 01:07:12.521744 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:07:12.521779 kubelet[2539]: W0307 01:07:12.521762 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:07:12.521779 kubelet[2539]: E0307 01:07:12.521774 2539 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:07:12.536427 systemd[1]: Started cri-containerd-96188266ef6eba7e2812164ee4081657b575a7a5011670be4d2cd618a2b4b84c.scope - libcontainer container 96188266ef6eba7e2812164ee4081657b575a7a5011670be4d2cd618a2b4b84c. Mar 7 01:07:12.556302 containerd[1457]: time="2026-03-07T01:07:12.556231512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rszbn,Uid:abf805b0-7331-4d20-a158-cd804b85c9b9,Namespace:calico-system,Attempt:0,}" Mar 7 01:07:12.591002 containerd[1457]: time="2026-03-07T01:07:12.590432323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:12.591002 containerd[1457]: time="2026-03-07T01:07:12.590495433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:12.591002 containerd[1457]: time="2026-03-07T01:07:12.590507043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:12.591002 containerd[1457]: time="2026-03-07T01:07:12.590747376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:12.615572 systemd[1]: Started cri-containerd-25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308.scope - libcontainer container 25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308. Mar 7 01:07:12.618361 containerd[1457]: time="2026-03-07T01:07:12.618328882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b768f47d6-77jlk,Uid:16a1c8a0-2405-4fba-b8d7-efa3b37fd076,Namespace:calico-system,Attempt:0,} returns sandbox id \"96188266ef6eba7e2812164ee4081657b575a7a5011670be4d2cd618a2b4b84c\"" Mar 7 01:07:12.619384 kubelet[2539]: E0307 01:07:12.619344 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:12.623305 containerd[1457]: time="2026-03-07T01:07:12.623021324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:07:12.646964 containerd[1457]: time="2026-03-07T01:07:12.646933026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rszbn,Uid:abf805b0-7331-4d20-a158-cd804b85c9b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\"" Mar 7 01:07:12.904213 update_engine[1444]: I20260307 01:07:12.903831 1444 update_attempter.cc:509] Updating boot flags... Mar 7 01:07:12.948341 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3108) Mar 7 01:07:13.030110 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3109) Mar 7 01:07:13.104343 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3109) Mar 7 01:07:13.314561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143745870.mount: Deactivated successfully. 
Mar 7 01:07:14.008601 containerd[1457]: time="2026-03-07T01:07:14.007852115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.009183 containerd[1457]: time="2026-03-07T01:07:14.009143683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:07:14.009872 containerd[1457]: time="2026-03-07T01:07:14.009845347Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.011920 containerd[1457]: time="2026-03-07T01:07:14.011898690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.012647 containerd[1457]: time="2026-03-07T01:07:14.012611795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.389562171s" Mar 7 01:07:14.012647 containerd[1457]: time="2026-03-07T01:07:14.012640535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:07:14.013567 containerd[1457]: time="2026-03-07T01:07:14.013541520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:07:14.022828 containerd[1457]: time="2026-03-07T01:07:14.022707197Z" level=info msg="CreateContainer within sandbox \"96188266ef6eba7e2812164ee4081657b575a7a5011670be4d2cd618a2b4b84c\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:07:14.033863 containerd[1457]: time="2026-03-07T01:07:14.033838126Z" level=info msg="CreateContainer within sandbox \"96188266ef6eba7e2812164ee4081657b575a7a5011670be4d2cd618a2b4b84c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f1b6697b7164446d5c6422cb58e62d143d72dd2916709299c86352c82396d0f7\"" Mar 7 01:07:14.034737 containerd[1457]: time="2026-03-07T01:07:14.034671882Z" level=info msg="StartContainer for \"f1b6697b7164446d5c6422cb58e62d143d72dd2916709299c86352c82396d0f7\"" Mar 7 01:07:14.064398 systemd[1]: Started cri-containerd-f1b6697b7164446d5c6422cb58e62d143d72dd2916709299c86352c82396d0f7.scope - libcontainer container f1b6697b7164446d5c6422cb58e62d143d72dd2916709299c86352c82396d0f7. Mar 7 01:07:14.108737 containerd[1457]: time="2026-03-07T01:07:14.108687960Z" level=info msg="StartContainer for \"f1b6697b7164446d5c6422cb58e62d143d72dd2916709299c86352c82396d0f7\" returns successfully" Mar 7 01:07:14.621353 containerd[1457]: time="2026-03-07T01:07:14.621315968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.622046 containerd[1457]: time="2026-03-07T01:07:14.622007372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:07:14.622360 containerd[1457]: time="2026-03-07T01:07:14.622317504Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.624201 containerd[1457]: time="2026-03-07T01:07:14.623994254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:14.625798 
containerd[1457]: time="2026-03-07T01:07:14.625493383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 611.925803ms" Mar 7 01:07:14.625798 containerd[1457]: time="2026-03-07T01:07:14.625525303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:07:14.629955 containerd[1457]: time="2026-03-07T01:07:14.629918631Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:07:14.643528 containerd[1457]: time="2026-03-07T01:07:14.643497805Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a\"" Mar 7 01:07:14.644315 containerd[1457]: time="2026-03-07T01:07:14.644286660Z" level=info msg="StartContainer for \"975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a\"" Mar 7 01:07:14.681416 systemd[1]: Started cri-containerd-975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a.scope - libcontainer container 975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a. 
Mar 7 01:07:14.710781 containerd[1457]: time="2026-03-07T01:07:14.710733382Z" level=info msg="StartContainer for \"975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a\" returns successfully" Mar 7 01:07:14.724246 systemd[1]: cri-containerd-975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a.scope: Deactivated successfully. Mar 7 01:07:14.787699 kubelet[2539]: E0307 01:07:14.786641 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wh6l9" podUID="276b50de-650e-4285-9112-60e139a99998" Mar 7 01:07:14.815646 containerd[1457]: time="2026-03-07T01:07:14.815569012Z" level=info msg="shim disconnected" id=975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a namespace=k8s.io Mar 7 01:07:14.815646 containerd[1457]: time="2026-03-07T01:07:14.815640882Z" level=warning msg="cleaning up after shim disconnected" id=975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a namespace=k8s.io Mar 7 01:07:14.815646 containerd[1457]: time="2026-03-07T01:07:14.815650192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:07:14.872084 containerd[1457]: time="2026-03-07T01:07:14.871712999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:07:14.873307 kubelet[2539]: E0307 01:07:14.873220 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:14.894907 kubelet[2539]: I0307 01:07:14.894624 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b768f47d6-77jlk" podStartSLOduration=1.5037905029999998 podStartE2EDuration="2.894613491s" podCreationTimestamp="2026-03-07 01:07:12 +0000 UTC" 
firstStartedPulling="2026-03-07 01:07:12.622586771 +0000 UTC m=+18.928774008" lastFinishedPulling="2026-03-07 01:07:14.013409759 +0000 UTC m=+20.319596996" observedRunningTime="2026-03-07 01:07:14.893509655 +0000 UTC m=+21.199696892" watchObservedRunningTime="2026-03-07 01:07:14.894613491 +0000 UTC m=+21.200800738" Mar 7 01:07:15.301722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-975168aea6bc74a99fcfe3af52561d902eba9ee9c1b8aa551e22908244cbf01a-rootfs.mount: Deactivated successfully. Mar 7 01:07:15.874392 kubelet[2539]: E0307 01:07:15.873867 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:16.787115 kubelet[2539]: E0307 01:07:16.786436 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wh6l9" podUID="276b50de-650e-4285-9112-60e139a99998" Mar 7 01:07:16.876217 kubelet[2539]: E0307 01:07:16.876186 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:18.232193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331955.mount: Deactivated successfully. 
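The `pod_startup_latency_tracker` entry above is internally consistent: `podStartE2EDuration` is the observed-running time minus the pod creation time, and `podStartSLOduration` appears to be that end-to-end figure minus the image-pull window (`lastFinishedPulling - firstStartedPulling`), since the SLO metric excludes pulling. Checking the arithmetic with the timestamps from the log (treated as seconds past 01:07:00 UTC; the subtraction rule is an inference from the logged values, not quoted from kubelet source):

```go
package main

import "fmt"

func main() {
	// Timestamps from the log entry, as seconds past 01:07:00 UTC.
	created := 12.0           // podCreationTimestamp 01:07:12
	firstPull := 12.622586771 // firstStartedPulling
	lastPull := 14.013409759  // lastFinishedPulling
	running := 14.894613491   // observedRunningTime

	e2e := running - created     // podStartE2EDuration ≈ 2.894613491s
	pull := lastPull - firstPull // image-pull window ≈ 1.390822988s
	slo := e2e - pull            // podStartSLOduration ≈ 1.503790503s
	fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo)
}
```

Both computed values match the logged `podStartE2EDuration="2.894613491s"` and `podStartSLOduration=1.503790503`.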
Mar 7 01:07:18.261314 containerd[1457]: time="2026-03-07T01:07:18.260727688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:18.261780 containerd[1457]: time="2026-03-07T01:07:18.261456311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:07:18.263288 containerd[1457]: time="2026-03-07T01:07:18.262111535Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:18.264018 containerd[1457]: time="2026-03-07T01:07:18.263979404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:18.265105 containerd[1457]: time="2026-03-07T01:07:18.264693988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.392944109s"
Mar 7 01:07:18.265105 containerd[1457]: time="2026-03-07T01:07:18.264721468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:07:18.269320 containerd[1457]: time="2026-03-07T01:07:18.269235852Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:07:18.285299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723878423.mount: Deactivated successfully.
Mar 7 01:07:18.286581 containerd[1457]: time="2026-03-07T01:07:18.286460322Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322\""
Mar 7 01:07:18.288023 containerd[1457]: time="2026-03-07T01:07:18.288001120Z" level=info msg="StartContainer for \"d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322\""
Mar 7 01:07:18.324393 systemd[1]: Started cri-containerd-d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322.scope - libcontainer container d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322.
Mar 7 01:07:18.361466 containerd[1457]: time="2026-03-07T01:07:18.361419485Z" level=info msg="StartContainer for \"d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322\" returns successfully"
Mar 7 01:07:18.404597 systemd[1]: cri-containerd-d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322.scope: Deactivated successfully.
Mar 7 01:07:18.512052 containerd[1457]: time="2026-03-07T01:07:18.511343272Z" level=info msg="shim disconnected" id=d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322 namespace=k8s.io
Mar 7 01:07:18.512052 containerd[1457]: time="2026-03-07T01:07:18.511397642Z" level=warning msg="cleaning up after shim disconnected" id=d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322 namespace=k8s.io
Mar 7 01:07:18.512052 containerd[1457]: time="2026-03-07T01:07:18.511407422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:07:18.786610 kubelet[2539]: E0307 01:07:18.786490 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wh6l9" podUID="276b50de-650e-4285-9112-60e139a99998"
Mar 7 01:07:18.884166 containerd[1457]: time="2026-03-07T01:07:18.884104975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:07:19.233229 systemd[1]: run-containerd-runc-k8s.io-d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322-runc.zC82Vz.mount: Deactivated successfully.
Mar 7 01:07:19.233391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0b1479d268f5e9caee2bcd3380947e9d92f85b7ead0713f8091ed36ec954322-rootfs.mount: Deactivated successfully.
Mar 7 01:07:20.644788 containerd[1457]: time="2026-03-07T01:07:20.644720801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:20.645869 containerd[1457]: time="2026-03-07T01:07:20.645808686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:07:20.646312 containerd[1457]: time="2026-03-07T01:07:20.646235878Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:20.648822 containerd[1457]: time="2026-03-07T01:07:20.648773091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:07:20.650415 containerd[1457]: time="2026-03-07T01:07:20.649810806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.765654641s"
Mar 7 01:07:20.650415 containerd[1457]: time="2026-03-07T01:07:20.649846796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:07:20.653630 containerd[1457]: time="2026-03-07T01:07:20.653584073Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:07:20.669381 containerd[1457]: time="2026-03-07T01:07:20.669330380Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22\""
Mar 7 01:07:20.670906 containerd[1457]: time="2026-03-07T01:07:20.670872168Z" level=info msg="StartContainer for \"55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22\""
Mar 7 01:07:20.705395 systemd[1]: run-containerd-runc-k8s.io-55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22-runc.8ZEbdo.mount: Deactivated successfully.
Mar 7 01:07:20.719388 systemd[1]: Started cri-containerd-55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22.scope - libcontainer container 55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22.
Mar 7 01:07:20.748136 containerd[1457]: time="2026-03-07T01:07:20.748097252Z" level=info msg="StartContainer for \"55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22\" returns successfully"
Mar 7 01:07:20.792006 kubelet[2539]: E0307 01:07:20.791600 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wh6l9" podUID="276b50de-650e-4285-9112-60e139a99998"
Mar 7 01:07:21.339557 systemd[1]: cri-containerd-55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22.scope: Deactivated successfully.
Mar 7 01:07:21.349313 kubelet[2539]: I0307 01:07:21.348563 2539 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 01:07:21.387034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22-rootfs.mount: Deactivated successfully.
Mar 7 01:07:21.421541 systemd[1]: Created slice kubepods-burstable-podee7f7577_2f7b_4c36_bfd3_e0c694ed04f3.slice - libcontainer container kubepods-burstable-podee7f7577_2f7b_4c36_bfd3_e0c694ed04f3.slice.
Mar 7 01:07:21.434116 containerd[1457]: time="2026-03-07T01:07:21.433499869Z" level=info msg="shim disconnected" id=55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22 namespace=k8s.io
Mar 7 01:07:21.434116 containerd[1457]: time="2026-03-07T01:07:21.433587629Z" level=warning msg="cleaning up after shim disconnected" id=55ac9e934d1a43e776f62ec02cacec7e35fa5503539ddf065dc3a4d996c1bd22 namespace=k8s.io
Mar 7 01:07:21.434116 containerd[1457]: time="2026-03-07T01:07:21.433598850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:07:21.441053 systemd[1]: Created slice kubepods-besteffort-podf81e418a_c0cc_43f5_9c0e_efe353ae4eca.slice - libcontainer container kubepods-besteffort-podf81e418a_c0cc_43f5_9c0e_efe353ae4eca.slice.
Mar 7 01:07:21.459766 systemd[1]: Created slice kubepods-burstable-podfc0c6376_e9a6_43ac_9b83_1647076d0c22.slice - libcontainer container kubepods-burstable-podfc0c6376_e9a6_43ac_9b83_1647076d0c22.slice.
Mar 7 01:07:21.469801 containerd[1457]: time="2026-03-07T01:07:21.468728323Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:07:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:07:21.476180 kubelet[2539]: I0307 01:07:21.476141 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-nginx-config\") pod \"whisker-98c65c778-2zvfj\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.477535 kubelet[2539]: I0307 01:07:21.476190 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhd7w\" (UniqueName: \"kubernetes.io/projected/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-kube-api-access-lhd7w\") pod \"whisker-98c65c778-2zvfj\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.477535 kubelet[2539]: I0307 01:07:21.476227 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1cf94d0a-9aa0-4302-a083-681de00390a5-calico-apiserver-certs\") pod \"calico-apiserver-7fd8959695-wdzzd\" (UID: \"1cf94d0a-9aa0-4302-a083-681de00390a5\") " pod="calico-system/calico-apiserver-7fd8959695-wdzzd"
Mar 7 01:07:21.477535 kubelet[2539]: I0307 01:07:21.476310 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4622\" (UniqueName: \"kubernetes.io/projected/ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3-kube-api-access-d4622\") pod \"coredns-674b8bbfcf-9wt7m\" (UID: \"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3\") " pod="kube-system/coredns-674b8bbfcf-9wt7m"
Mar 7 01:07:21.477535 kubelet[2539]: I0307 01:07:21.477055 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-backend-key-pair\") pod \"whisker-98c65c778-2zvfj\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.477535 kubelet[2539]: I0307 01:07:21.477086 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc0c6376-e9a6-43ac-9b83-1647076d0c22-config-volume\") pod \"coredns-674b8bbfcf-shk8j\" (UID: \"fc0c6376-e9a6-43ac-9b83-1647076d0c22\") " pod="kube-system/coredns-674b8bbfcf-shk8j"
Mar 7 01:07:21.477671 kubelet[2539]: I0307 01:07:21.477196 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlgc\" (UniqueName: \"kubernetes.io/projected/fc0c6376-e9a6-43ac-9b83-1647076d0c22-kube-api-access-swlgc\") pod \"coredns-674b8bbfcf-shk8j\" (UID: \"fc0c6376-e9a6-43ac-9b83-1647076d0c22\") " pod="kube-system/coredns-674b8bbfcf-shk8j"
Mar 7 01:07:21.477671 kubelet[2539]: I0307 01:07:21.477228 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-ca-bundle\") pod \"whisker-98c65c778-2zvfj\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.477671 kubelet[2539]: I0307 01:07:21.477285 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fc6\" (UniqueName: \"kubernetes.io/projected/1cf94d0a-9aa0-4302-a083-681de00390a5-kube-api-access-x7fc6\") pod \"calico-apiserver-7fd8959695-wdzzd\" (UID: \"1cf94d0a-9aa0-4302-a083-681de00390a5\") " pod="calico-system/calico-apiserver-7fd8959695-wdzzd"
Mar 7 01:07:21.477671 kubelet[2539]: I0307 01:07:21.477305 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3-config-volume\") pod \"coredns-674b8bbfcf-9wt7m\" (UID: \"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3\") " pod="kube-system/coredns-674b8bbfcf-9wt7m"
Mar 7 01:07:21.479566 systemd[1]: Created slice kubepods-besteffort-pod1cf94d0a_9aa0_4302_a083_681de00390a5.slice - libcontainer container kubepods-besteffort-pod1cf94d0a_9aa0_4302_a083_681de00390a5.slice.
Mar 7 01:07:21.492890 systemd[1]: Created slice kubepods-besteffort-podec64a8fa_f04f_42f8_b5ba_1f4af3044695.slice - libcontainer container kubepods-besteffort-podec64a8fa_f04f_42f8_b5ba_1f4af3044695.slice.
Mar 7 01:07:21.503493 systemd[1]: Created slice kubepods-besteffort-podb2c5388f_e7db_4945_8a7f_ca5fddbf9992.slice - libcontainer container kubepods-besteffort-podb2c5388f_e7db_4945_8a7f_ca5fddbf9992.slice.
Mar 7 01:07:21.510919 systemd[1]: Created slice kubepods-besteffort-pod2754896e_11fe_452b_a84a_c172f3237c2d.slice - libcontainer container kubepods-besteffort-pod2754896e_11fe_452b_a84a_c172f3237c2d.slice.
Mar 7 01:07:21.580463 kubelet[2539]: I0307 01:07:21.577727 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2754896e-11fe-452b-a84a-c172f3237c2d-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-pg6fp\" (UID: \"2754896e-11fe-452b-a84a-c172f3237c2d\") " pod="calico-system/goldmane-5b85766d88-pg6fp"
Mar 7 01:07:21.580463 kubelet[2539]: I0307 01:07:21.577807 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5sxp\" (UniqueName: \"kubernetes.io/projected/ec64a8fa-f04f-42f8-b5ba-1f4af3044695-kube-api-access-f5sxp\") pod \"calico-apiserver-7fd8959695-r25tl\" (UID: \"ec64a8fa-f04f-42f8-b5ba-1f4af3044695\") " pod="calico-system/calico-apiserver-7fd8959695-r25tl"
Mar 7 01:07:21.580463 kubelet[2539]: I0307 01:07:21.577884 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2754896e-11fe-452b-a84a-c172f3237c2d-config\") pod \"goldmane-5b85766d88-pg6fp\" (UID: \"2754896e-11fe-452b-a84a-c172f3237c2d\") " pod="calico-system/goldmane-5b85766d88-pg6fp"
Mar 7 01:07:21.580463 kubelet[2539]: I0307 01:07:21.577938 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec64a8fa-f04f-42f8-b5ba-1f4af3044695-calico-apiserver-certs\") pod \"calico-apiserver-7fd8959695-r25tl\" (UID: \"ec64a8fa-f04f-42f8-b5ba-1f4af3044695\") " pod="calico-system/calico-apiserver-7fd8959695-r25tl"
Mar 7 01:07:21.580463 kubelet[2539]: I0307 01:07:21.577956 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvg6\" (UniqueName: \"kubernetes.io/projected/2754896e-11fe-452b-a84a-c172f3237c2d-kube-api-access-tzvg6\") pod \"goldmane-5b85766d88-pg6fp\" (UID: \"2754896e-11fe-452b-a84a-c172f3237c2d\") " pod="calico-system/goldmane-5b85766d88-pg6fp"
Mar 7 01:07:21.580683 kubelet[2539]: I0307 01:07:21.577974 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c5388f-e7db-4945-8a7f-ca5fddbf9992-tigera-ca-bundle\") pod \"calico-kube-controllers-84fcdd589f-7w9zs\" (UID: \"b2c5388f-e7db-4945-8a7f-ca5fddbf9992\") " pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs"
Mar 7 01:07:21.582057 kubelet[2539]: I0307 01:07:21.581984 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2754896e-11fe-452b-a84a-c172f3237c2d-goldmane-key-pair\") pod \"goldmane-5b85766d88-pg6fp\" (UID: \"2754896e-11fe-452b-a84a-c172f3237c2d\") " pod="calico-system/goldmane-5b85766d88-pg6fp"
Mar 7 01:07:21.582721 kubelet[2539]: I0307 01:07:21.582704 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbfp\" (UniqueName: \"kubernetes.io/projected/b2c5388f-e7db-4945-8a7f-ca5fddbf9992-kube-api-access-sfbfp\") pod \"calico-kube-controllers-84fcdd589f-7w9zs\" (UID: \"b2c5388f-e7db-4945-8a7f-ca5fddbf9992\") " pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs"
Mar 7 01:07:21.732007 kubelet[2539]: E0307 01:07:21.731971 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:21.732876 containerd[1457]: time="2026-03-07T01:07:21.732826247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wt7m,Uid:ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3,Namespace:kube-system,Attempt:0,}"
Mar 7 01:07:21.760058 containerd[1457]: time="2026-03-07T01:07:21.760013955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98c65c778-2zvfj,Uid:f81e418a-c0cc-43f5-9c0e-efe353ae4eca,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:21.770631 kubelet[2539]: E0307 01:07:21.770605 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:21.774550 containerd[1457]: time="2026-03-07T01:07:21.774274531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk8j,Uid:fc0c6376-e9a6-43ac-9b83-1647076d0c22,Namespace:kube-system,Attempt:0,}"
Mar 7 01:07:21.788575 containerd[1457]: time="2026-03-07T01:07:21.788535198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-wdzzd,Uid:1cf94d0a-9aa0-4302-a083-681de00390a5,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:21.802312 containerd[1457]: time="2026-03-07T01:07:21.802251112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-r25tl,Uid:ec64a8fa-f04f-42f8-b5ba-1f4af3044695,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:21.810549 containerd[1457]: time="2026-03-07T01:07:21.810502891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fcdd589f-7w9zs,Uid:b2c5388f-e7db-4945-8a7f-ca5fddbf9992,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:21.816346 containerd[1457]: time="2026-03-07T01:07:21.816248277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pg6fp,Uid:2754896e-11fe-452b-a84a-c172f3237c2d,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:21.917348 containerd[1457]: time="2026-03-07T01:07:21.917239739Z" level=error msg="Failed to destroy network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.931298 containerd[1457]: time="2026-03-07T01:07:21.930375750Z" level=error msg="encountered an error cleaning up failed sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.931298 containerd[1457]: time="2026-03-07T01:07:21.930681412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-98c65c778-2zvfj,Uid:f81e418a-c0cc-43f5-9c0e-efe353ae4eca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.932315 kubelet[2539]: E0307 01:07:21.932186 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.932315 kubelet[2539]: E0307 01:07:21.932243 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.932680 kubelet[2539]: E0307 01:07:21.932424 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-98c65c778-2zvfj"
Mar 7 01:07:21.933758 kubelet[2539]: E0307 01:07:21.932772 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-98c65c778-2zvfj_calico-system(f81e418a-c0cc-43f5-9c0e-efe353ae4eca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-98c65c778-2zvfj_calico-system(f81e418a-c0cc-43f5-9c0e-efe353ae4eca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-98c65c778-2zvfj" podUID="f81e418a-c0cc-43f5-9c0e-efe353ae4eca"
Mar 7 01:07:21.936564 containerd[1457]: time="2026-03-07T01:07:21.936529369Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:07:21.942550 containerd[1457]: time="2026-03-07T01:07:21.942501697Z" level=error msg="Failed to destroy network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.942892 containerd[1457]: time="2026-03-07T01:07:21.942858068Z" level=error msg="encountered an error cleaning up failed sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.942936 containerd[1457]: time="2026-03-07T01:07:21.942907809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wt7m,Uid:ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.943082 kubelet[2539]: E0307 01:07:21.943050 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:21.943125 kubelet[2539]: E0307 01:07:21.943100 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9wt7m"
Mar 7 01:07:21.943125 kubelet[2539]: E0307 01:07:21.943119 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9wt7m"
Mar 7 01:07:21.943190 kubelet[2539]: E0307 01:07:21.943158 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9wt7m_kube-system(ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9wt7m_kube-system(ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9wt7m" podUID="ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3"
Mar 7 01:07:21.972699 containerd[1457]: time="2026-03-07T01:07:21.972665458Z" level=info msg="CreateContainer within sandbox \"25867ee874b88ed87773bdfc471b3fea57c53b46085153a0f3d2816cc3714308\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b2a661484045f679f28ce19d5760408b31254121040293f74ce90d6585646b6\""
Mar 7 01:07:21.974123 containerd[1457]: time="2026-03-07T01:07:21.973609752Z" level=info msg="StartContainer for \"5b2a661484045f679f28ce19d5760408b31254121040293f74ce90d6585646b6\""
Mar 7 01:07:22.004299 containerd[1457]: time="2026-03-07T01:07:22.004115885Z" level=error msg="Failed to destroy network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.004772 containerd[1457]: time="2026-03-07T01:07:22.004646387Z" level=error msg="encountered an error cleaning up failed sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.004772 containerd[1457]: time="2026-03-07T01:07:22.004693517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk8j,Uid:fc0c6376-e9a6-43ac-9b83-1647076d0c22,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.006355 kubelet[2539]: E0307 01:07:22.005663 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.006355 kubelet[2539]: E0307 01:07:22.005741 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-shk8j"
Mar 7 01:07:22.006355 kubelet[2539]: E0307 01:07:22.005763 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-shk8j"
Mar 7 01:07:22.006498 kubelet[2539]: E0307 01:07:22.005805 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-shk8j_kube-system(fc0c6376-e9a6-43ac-9b83-1647076d0c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-shk8j_kube-system(fc0c6376-e9a6-43ac-9b83-1647076d0c22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-shk8j" podUID="fc0c6376-e9a6-43ac-9b83-1647076d0c22"
Mar 7 01:07:22.030632 systemd[1]: Started cri-containerd-5b2a661484045f679f28ce19d5760408b31254121040293f74ce90d6585646b6.scope - libcontainer container 5b2a661484045f679f28ce19d5760408b31254121040293f74ce90d6585646b6.
Mar 7 01:07:22.055926 containerd[1457]: time="2026-03-07T01:07:22.055867597Z" level=error msg="Failed to destroy network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.056562 containerd[1457]: time="2026-03-07T01:07:22.056466241Z" level=error msg="encountered an error cleaning up failed sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.056562 containerd[1457]: time="2026-03-07T01:07:22.056522301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-r25tl,Uid:ec64a8fa-f04f-42f8-b5ba-1f4af3044695,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.057577 kubelet[2539]: E0307 01:07:22.056818 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.057577 kubelet[2539]: E0307 01:07:22.056886 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fd8959695-r25tl"
Mar 7 01:07:22.057577 kubelet[2539]: E0307 01:07:22.056906 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fd8959695-r25tl"
Mar 7 01:07:22.057742 kubelet[2539]: E0307 01:07:22.056950 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd8959695-r25tl_calico-system(ec64a8fa-f04f-42f8-b5ba-1f4af3044695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd8959695-r25tl_calico-system(ec64a8fa-f04f-42f8-b5ba-1f4af3044695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7fd8959695-r25tl" podUID="ec64a8fa-f04f-42f8-b5ba-1f4af3044695"
Mar 7 01:07:22.066074 containerd[1457]: time="2026-03-07T01:07:22.066037644Z" level=error msg="Failed to destroy network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.067155 containerd[1457]: time="2026-03-07T01:07:22.066694326Z" level=error msg="encountered an error cleaning up failed sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.067370 containerd[1457]: time="2026-03-07T01:07:22.067294169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-wdzzd,Uid:1cf94d0a-9aa0-4302-a083-681de00390a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.068337 kubelet[2539]: E0307 01:07:22.067603 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:07:22.068337 kubelet[2539]: E0307 01:07:22.067645 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fd8959695-wdzzd"
Mar 7
01:07:22.068337 kubelet[2539]: E0307 01:07:22.067663 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fd8959695-wdzzd" Mar 7 01:07:22.068435 kubelet[2539]: E0307 01:07:22.067697 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd8959695-wdzzd_calico-system(1cf94d0a-9aa0-4302-a083-681de00390a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd8959695-wdzzd_calico-system(1cf94d0a-9aa0-4302-a083-681de00390a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7fd8959695-wdzzd" podUID="1cf94d0a-9aa0-4302-a083-681de00390a5" Mar 7 01:07:22.079076 containerd[1457]: time="2026-03-07T01:07:22.079040002Z" level=error msg="Failed to destroy network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.080644 containerd[1457]: time="2026-03-07T01:07:22.080608649Z" level=error msg="encountered an error cleaning up failed sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.080781 containerd[1457]: time="2026-03-07T01:07:22.080733499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fcdd589f-7w9zs,Uid:b2c5388f-e7db-4945-8a7f-ca5fddbf9992,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.081670 kubelet[2539]: E0307 01:07:22.081088 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.081670 kubelet[2539]: E0307 01:07:22.081131 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs" Mar 7 01:07:22.081670 kubelet[2539]: E0307 01:07:22.081148 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs" Mar 7 01:07:22.081774 kubelet[2539]: E0307 01:07:22.081187 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84fcdd589f-7w9zs_calico-system(b2c5388f-e7db-4945-8a7f-ca5fddbf9992)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84fcdd589f-7w9zs_calico-system(b2c5388f-e7db-4945-8a7f-ca5fddbf9992)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs" podUID="b2c5388f-e7db-4945-8a7f-ca5fddbf9992" Mar 7 01:07:22.082857 containerd[1457]: time="2026-03-07T01:07:22.082762338Z" level=error msg="Failed to destroy network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.083406 containerd[1457]: time="2026-03-07T01:07:22.083358591Z" level=error msg="encountered an error cleaning up failed sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.083559 containerd[1457]: time="2026-03-07T01:07:22.083504902Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-pg6fp,Uid:2754896e-11fe-452b-a84a-c172f3237c2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.084528 kubelet[2539]: E0307 01:07:22.084455 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:07:22.084528 kubelet[2539]: E0307 01:07:22.084493 2539 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pg6fp" Mar 7 01:07:22.084708 kubelet[2539]: E0307 01:07:22.084618 2539 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pg6fp" Mar 7 01:07:22.085387 kubelet[2539]: E0307 01:07:22.084668 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-pg6fp_calico-system(2754896e-11fe-452b-a84a-c172f3237c2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-pg6fp_calico-system(2754896e-11fe-452b-a84a-c172f3237c2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-pg6fp" podUID="2754896e-11fe-452b-a84a-c172f3237c2d" Mar 7 01:07:22.102563 containerd[1457]: time="2026-03-07T01:07:22.102521888Z" level=info msg="StartContainer for \"5b2a661484045f679f28ce19d5760408b31254121040293f74ce90d6585646b6\" returns successfully" Mar 7 01:07:22.671179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d-shm.mount: Deactivated successfully. Mar 7 01:07:22.793091 systemd[1]: Created slice kubepods-besteffort-pod276b50de_650e_4285_9112_60e139a99998.slice - libcontainer container kubepods-besteffort-pod276b50de_650e_4285_9112_60e139a99998.slice. 
Mar 7 01:07:22.796032 containerd[1457]: time="2026-03-07T01:07:22.795960903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wh6l9,Uid:276b50de-650e-4285-9112-60e139a99998,Namespace:calico-system,Attempt:0,}"
Mar 7 01:07:22.910889 kubelet[2539]: I0307 01:07:22.910845 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:22.913802 containerd[1457]: time="2026-03-07T01:07:22.913055110Z" level=info msg="StopPodSandbox for \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\""
Mar 7 01:07:22.913802 containerd[1457]: time="2026-03-07T01:07:22.913210261Z" level=info msg="Ensure that sandbox 31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df in task-service has been cleanup successfully"
Mar 7 01:07:22.918507 kubelet[2539]: I0307 01:07:22.918475 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e"
Mar 7 01:07:22.918992 containerd[1457]: time="2026-03-07T01:07:22.918940837Z" level=info msg="StopPodSandbox for \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\""
Mar 7 01:07:22.919148 containerd[1457]: time="2026-03-07T01:07:22.919066557Z" level=info msg="Ensure that sandbox 40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e in task-service has been cleanup successfully"
Mar 7 01:07:22.922758 kubelet[2539]: I0307 01:07:22.922633 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:22.923080 containerd[1457]: time="2026-03-07T01:07:22.922978315Z" level=info msg="StopPodSandbox for \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\""
Mar 7 01:07:22.923290 containerd[1457]: time="2026-03-07T01:07:22.923097715Z" level=info msg="Ensure that sandbox 247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575 in task-service has been cleanup successfully"
Mar 7 01:07:22.926755 kubelet[2539]: I0307 01:07:22.926667 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:22.928622 containerd[1457]: time="2026-03-07T01:07:22.927464635Z" level=info msg="StopPodSandbox for \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\""
Mar 7 01:07:22.930150 containerd[1457]: time="2026-03-07T01:07:22.930118088Z" level=info msg="Ensure that sandbox ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526 in task-service has been cleanup successfully"
Mar 7 01:07:22.933120 kubelet[2539]: I0307 01:07:22.933075 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rszbn" podStartSLOduration=2.931413 podStartE2EDuration="10.93306389s" podCreationTimestamp="2026-03-07 01:07:12 +0000 UTC" firstStartedPulling="2026-03-07 01:07:12.648880499 +0000 UTC m=+18.955067746" lastFinishedPulling="2026-03-07 01:07:20.650531399 +0000 UTC m=+26.956718636" observedRunningTime="2026-03-07 01:07:22.931938795 +0000 UTC m=+29.238126032" watchObservedRunningTime="2026-03-07 01:07:22.93306389 +0000 UTC m=+29.239251127"
Mar 7 01:07:22.951888 kubelet[2539]: I0307 01:07:22.951856 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:22.954428 containerd[1457]: time="2026-03-07T01:07:22.954397646Z" level=info msg="StopPodSandbox for \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\""
Mar 7 01:07:22.954700 containerd[1457]: time="2026-03-07T01:07:22.954653228Z" level=info msg="Ensure that sandbox fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d in task-service has been cleanup successfully"
Mar 7 01:07:22.963648 systemd-networkd[1382]: cali1f5cdc19fb9: Link UP
Mar 7 01:07:22.964545 systemd-networkd[1382]: cali1f5cdc19fb9: Gained carrier
Mar 7 01:07:22.971834 kubelet[2539]: I0307 01:07:22.970838 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5"
Mar 7 01:07:22.978377 containerd[1457]: time="2026-03-07T01:07:22.977074878Z" level=info msg="StopPodSandbox for \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\""
Mar 7 01:07:22.979792 containerd[1457]: time="2026-03-07T01:07:22.979769471Z" level=info msg="Ensure that sandbox 807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5 in task-service has been cleanup successfully"
Mar 7 01:07:23.005061 kubelet[2539]: I0307 01:07:23.004763 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98"
Mar 7 01:07:23.007313 containerd[1457]: time="2026-03-07T01:07:23.007249923Z" level=info msg="StopPodSandbox for \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\""
Mar 7 01:07:23.007657 containerd[1457]: time="2026-03-07T01:07:23.007637875Z" level=info msg="Ensure that sandbox 0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98 in task-service has been cleanup successfully"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.828 [ERROR][3632] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.849 [INFO][3632] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-csi--node--driver--wh6l9-eth0 csi-node-driver- calico-system 276b50de-650e-4285-9112-60e139a99998 707 0 2026-03-07 01:07:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-198-121 csi-node-driver-wh6l9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f5cdc19fb9 [] [] }} ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.849 [INFO][3632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.876 [INFO][3644] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" HandleID="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Workload="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.882 [INFO][3644] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" HandleID="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Workload="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"csi-node-driver-wh6l9", "timestamp":"2026-03-07 01:07:22.876164685 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000246c60)}
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.882 [INFO][3644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.882 [INFO][3644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.882 [INFO][3644] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121'
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.885 [INFO][3644] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.891 [INFO][3644] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.896 [INFO][3644] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.903 [INFO][3644] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.905 [INFO][3644] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.905 [INFO][3644] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.907 [INFO][3644] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.913 [INFO][3644] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.924 [INFO][3644] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.193/26] block=192.168.26.192/26 handle="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.924 [INFO][3644] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.193/26] handle="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" host="172-239-198-121"
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.924 [INFO][3644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:23.023535 containerd[1457]: 2026-03-07 01:07:22.924 [INFO][3644] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.193/26] IPv6=[] ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" HandleID="k8s-pod-network.90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Workload="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:22.945 [INFO][3632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-csi--node--driver--wh6l9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"276b50de-650e-4285-9112-60e139a99998", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"csi-node-driver-wh6l9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f5cdc19fb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:22.945 [INFO][3632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.193/32] ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:22.945 [INFO][3632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f5cdc19fb9 ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:22.966 [INFO][3632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:22.968 [INFO][3632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-csi--node--driver--wh6l9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"276b50de-650e-4285-9112-60e139a99998", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75", Pod:"csi-node-driver-wh6l9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f5cdc19fb9", MAC:"8e:f6:c8:1d:29:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:23.024035 containerd[1457]: 2026-03-07 01:07:23.002 [INFO][3632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75" Namespace="calico-system" Pod="csi-node-driver-wh6l9" WorkloadEndpoint="172--239--198--121-k8s-csi--node--driver--wh6l9-eth0"
Mar 7 01:07:23.157480 containerd[1457]: time="2026-03-07T01:07:23.157004733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:07:23.157480 containerd[1457]: time="2026-03-07T01:07:23.157056143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:07:23.157480 containerd[1457]: time="2026-03-07T01:07:23.157069993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:23.157480 containerd[1457]: time="2026-03-07T01:07:23.157148854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.079 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.079 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" iface="eth0" netns="/var/run/netns/cni-756a8bf0-9e9d-0f42-9270-68dc81bac0fb"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.081 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" iface="eth0" netns="/var/run/netns/cni-756a8bf0-9e9d-0f42-9270-68dc81bac0fb"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.082 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" iface="eth0" netns="/var/run/netns/cni-756a8bf0-9e9d-0f42-9270-68dc81bac0fb"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.082 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.082 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.121 [INFO][3762] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.122 [INFO][3762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.122 [INFO][3762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.142 [WARNING][3762] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.142 [INFO][3762] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.146 [INFO][3762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:23.175048 containerd[1457]: 2026-03-07 01:07:23.165 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:23.175048 containerd[1457]: time="2026-03-07T01:07:23.174052917Z" level=info msg="TearDown network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" successfully"
Mar 7 01:07:23.175048 containerd[1457]: time="2026-03-07T01:07:23.174076397Z" level=info msg="StopPodSandbox for \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" returns successfully"
Mar 7 01:07:23.175710 kubelet[2539]: E0307 01:07:23.174537 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Mar 7 01:07:23.177912 containerd[1457]: time="2026-03-07T01:07:23.176472058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wt7m,Uid:ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3,Namespace:kube-system,Attempt:1,}"
Mar 7 01:07:23.184624 systemd[1]: run-netns-cni\x2d756a8bf0\x2d9e9d\x2d0f42\x2d9270\x2d68dc81bac0fb.mount: Deactivated successfully.
Mar 7 01:07:23.289430 systemd[1]: Started cri-containerd-90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75.scope - libcontainer container 90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75. Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.114 [INFO][3717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.115 [INFO][3717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" iface="eth0" netns="/var/run/netns/cni-8708fedb-b116-19b2-8a0d-e51c3561defb" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.115 [INFO][3717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" iface="eth0" netns="/var/run/netns/cni-8708fedb-b116-19b2-8a0d-e51c3561defb" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.116 [INFO][3717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" iface="eth0" netns="/var/run/netns/cni-8708fedb-b116-19b2-8a0d-e51c3561defb" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.116 [INFO][3717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.116 [INFO][3717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.262 [INFO][3774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.262 [INFO][3774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.263 [INFO][3774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.275 [WARNING][3774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.275 [INFO][3774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.281 [INFO][3774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.319199 containerd[1457]: 2026-03-07 01:07:23.307 [INFO][3717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Mar 7 01:07:23.319199 containerd[1457]: time="2026-03-07T01:07:23.319148548Z" level=info msg="TearDown network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" successfully" Mar 7 01:07:23.320700 containerd[1457]: time="2026-03-07T01:07:23.319170908Z" level=info msg="StopPodSandbox for \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" returns successfully" Mar 7 01:07:23.320727 kubelet[2539]: E0307 01:07:23.319838 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:23.321912 containerd[1457]: time="2026-03-07T01:07:23.321737120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk8j,Uid:fc0c6376-e9a6-43ac-9b83-1647076d0c22,Namespace:kube-system,Attempt:1,}" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.227 [INFO][3678] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.228 [INFO][3678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" iface="eth0" netns="/var/run/netns/cni-51fd9f00-2033-8542-81b9-6fccf0692b77" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.230 [INFO][3678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" iface="eth0" netns="/var/run/netns/cni-51fd9f00-2033-8542-81b9-6fccf0692b77" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.232 [INFO][3678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" iface="eth0" netns="/var/run/netns/cni-51fd9f00-2033-8542-81b9-6fccf0692b77" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.232 [INFO][3678] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.232 [INFO][3678] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.329 [INFO][3819] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.330 [INFO][3819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.330 [INFO][3819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.354 [WARNING][3819] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.354 [INFO][3819] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.356 [INFO][3819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.381349 containerd[1457]: 2026-03-07 01:07:23.361 [INFO][3678] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:23.383858 containerd[1457]: time="2026-03-07T01:07:23.383468428Z" level=info msg="TearDown network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" successfully" Mar 7 01:07:23.383858 containerd[1457]: time="2026-03-07T01:07:23.383492828Z" level=info msg="StopPodSandbox for \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" returns successfully" Mar 7 01:07:23.384664 containerd[1457]: time="2026-03-07T01:07:23.384353772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-wdzzd,Uid:1cf94d0a-9aa0-4302-a083-681de00390a5,Namespace:calico-system,Attempt:1,}" Mar 7 01:07:23.398880 containerd[1457]: time="2026-03-07T01:07:23.397644339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wh6l9,Uid:276b50de-650e-4285-9112-60e139a99998,Namespace:calico-system,Attempt:0,} returns sandbox id \"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75\"" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" iface="eth0" netns="/var/run/netns/cni-aed43c14-6cc0-70dc-cddd-dd8e4a6e5add" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" iface="eth0" netns="/var/run/netns/cni-aed43c14-6cc0-70dc-cddd-dd8e4a6e5add" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" iface="eth0" netns="/var/run/netns/cni-aed43c14-6cc0-70dc-cddd-dd8e4a6e5add" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.242 [INFO][3748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.346 [INFO][3821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.346 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.357 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.366 [WARNING][3821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.366 [INFO][3821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.369 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.399209 containerd[1457]: 2026-03-07 01:07:23.384 [INFO][3748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:23.399936 containerd[1457]: time="2026-03-07T01:07:23.399725699Z" level=info msg="TearDown network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" successfully" Mar 7 01:07:23.399936 containerd[1457]: time="2026-03-07T01:07:23.399744919Z" level=info msg="StopPodSandbox for \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" returns successfully" Mar 7 01:07:23.400396 containerd[1457]: time="2026-03-07T01:07:23.400233141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-r25tl,Uid:ec64a8fa-f04f-42f8-b5ba-1f4af3044695,Namespace:calico-system,Attempt:1,}" Mar 7 01:07:23.403043 containerd[1457]: time="2026-03-07T01:07:23.403010143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.077 [INFO][3708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Mar 7 
01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.077 [INFO][3708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" iface="eth0" netns="/var/run/netns/cni-c29ef16b-53bd-b4e5-a72c-314fe61c1d17" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.108 [INFO][3708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" iface="eth0" netns="/var/run/netns/cni-c29ef16b-53bd-b4e5-a72c-314fe61c1d17" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.115 [INFO][3708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" iface="eth0" netns="/var/run/netns/cni-c29ef16b-53bd-b4e5-a72c-314fe61c1d17" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.115 [INFO][3708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.115 [INFO][3708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.364 [INFO][3779] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.366 [INFO][3779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.370 [INFO][3779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.388 [WARNING][3779] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.388 [INFO][3779] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0" Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.390 [INFO][3779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.431132 containerd[1457]: 2026-03-07 01:07:23.419 [INFO][3708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Mar 7 01:07:23.431132 containerd[1457]: time="2026-03-07T01:07:23.426123003Z" level=info msg="TearDown network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" successfully" Mar 7 01:07:23.431132 containerd[1457]: time="2026-03-07T01:07:23.426144363Z" level=info msg="StopPodSandbox for \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" returns successfully" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.256 [INFO][3683] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.258 [INFO][3683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" iface="eth0" netns="/var/run/netns/cni-3099c5d7-c5cb-e2a4-3abb-6a037b56085a" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.258 [INFO][3683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" iface="eth0" netns="/var/run/netns/cni-3099c5d7-c5cb-e2a4-3abb-6a037b56085a" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.258 [INFO][3683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" iface="eth0" netns="/var/run/netns/cni-3099c5d7-c5cb-e2a4-3abb-6a037b56085a" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.258 [INFO][3683] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.258 [INFO][3683] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.438 [INFO][3832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.438 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.438 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.444 [WARNING][3832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.444 [INFO][3832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.445 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.449795 containerd[1457]: 2026-03-07 01:07:23.448 [INFO][3683] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Mar 7 01:07:23.450300 containerd[1457]: time="2026-03-07T01:07:23.450240229Z" level=info msg="TearDown network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" successfully" Mar 7 01:07:23.450367 containerd[1457]: time="2026-03-07T01:07:23.450352429Z" level=info msg="StopPodSandbox for \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" returns successfully" Mar 7 01:07:23.451131 containerd[1457]: time="2026-03-07T01:07:23.451113142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fcdd589f-7w9zs,Uid:b2c5388f-e7db-4945-8a7f-ca5fddbf9992,Namespace:calico-system,Attempt:1,}" Mar 7 01:07:23.498184 kubelet[2539]: I0307 01:07:23.498154 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhd7w\" (UniqueName: \"kubernetes.io/projected/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-kube-api-access-lhd7w\") pod \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " Mar 7 01:07:23.502926 kubelet[2539]: I0307 01:07:23.502392 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-nginx-config\") pod \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " Mar 7 01:07:23.502926 kubelet[2539]: I0307 01:07:23.502431 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-backend-key-pair\") pod \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " Mar 7 01:07:23.502926 kubelet[2539]: I0307 01:07:23.502455 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-ca-bundle\") pod \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\" (UID: \"f81e418a-c0cc-43f5-9c0e-efe353ae4eca\") " Mar 7 01:07:23.502926 kubelet[2539]: I0307 01:07:23.502894 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f81e418a-c0cc-43f5-9c0e-efe353ae4eca" (UID: "f81e418a-c0cc-43f5-9c0e-efe353ae4eca"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:07:23.503903 kubelet[2539]: I0307 01:07:23.503845 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "f81e418a-c0cc-43f5-9c0e-efe353ae4eca" (UID: "f81e418a-c0cc-43f5-9c0e-efe353ae4eca"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:07:23.511026 kubelet[2539]: I0307 01:07:23.510986 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-kube-api-access-lhd7w" (OuterVolumeSpecName: "kube-api-access-lhd7w") pod "f81e418a-c0cc-43f5-9c0e-efe353ae4eca" (UID: "f81e418a-c0cc-43f5-9c0e-efe353ae4eca"). InnerVolumeSpecName "kube-api-access-lhd7w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:07:23.515414 kubelet[2539]: I0307 01:07:23.515371 2539 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f81e418a-c0cc-43f5-9c0e-efe353ae4eca" (UID: "f81e418a-c0cc-43f5-9c0e-efe353ae4eca"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.317 [INFO][3743] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.317 [INFO][3743] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" iface="eth0" netns="/var/run/netns/cni-6326d019-4ee8-5dbc-349e-662ce2a1edc9" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.321 [INFO][3743] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" iface="eth0" netns="/var/run/netns/cni-6326d019-4ee8-5dbc-349e-662ce2a1edc9" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.323 [INFO][3743] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" iface="eth0" netns="/var/run/netns/cni-6326d019-4ee8-5dbc-349e-662ce2a1edc9" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.323 [INFO][3743] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.323 [INFO][3743] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.484 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.484 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.484 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.534 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.534 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.546 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.566309 containerd[1457]: 2026-03-07 01:07:23.554 [INFO][3743] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:23.567371 containerd[1457]: time="2026-03-07T01:07:23.567191416Z" level=info msg="TearDown network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" successfully" Mar 7 01:07:23.567371 containerd[1457]: time="2026-03-07T01:07:23.567222126Z" level=info msg="StopPodSandbox for \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" returns successfully" Mar 7 01:07:23.569711 containerd[1457]: time="2026-03-07T01:07:23.569657857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pg6fp,Uid:2754896e-11fe-452b-a84a-c172f3237c2d,Namespace:calico-system,Attempt:1,}" Mar 7 01:07:23.602904 kubelet[2539]: I0307 01:07:23.602779 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lhd7w\" (UniqueName: \"kubernetes.io/projected/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-kube-api-access-lhd7w\") on node \"172-239-198-121\" DevicePath \"\"" Mar 7 01:07:23.602904 kubelet[2539]: I0307 01:07:23.602809 2539 reconciler_common.go:299] 
"Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-nginx-config\") on node \"172-239-198-121\" DevicePath \"\"" Mar 7 01:07:23.602904 kubelet[2539]: I0307 01:07:23.602822 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-backend-key-pair\") on node \"172-239-198-121\" DevicePath \"\"" Mar 7 01:07:23.602904 kubelet[2539]: I0307 01:07:23.602834 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81e418a-c0cc-43f5-9c0e-efe353ae4eca-whisker-ca-bundle\") on node \"172-239-198-121\" DevicePath \"\"" Mar 7 01:07:23.618474 systemd-networkd[1382]: cali0890953b440: Link UP Mar 7 01:07:23.619658 systemd-networkd[1382]: cali0890953b440: Gained carrier Mar 7 01:07:23.684225 systemd[1]: run-netns-cni\x2d3099c5d7\x2dc5cb\x2de2a4\x2d3abb\x2d6a037b56085a.mount: Deactivated successfully. Mar 7 01:07:23.684543 systemd[1]: run-netns-cni\x2d6326d019\x2d4ee8\x2d5dbc\x2d349e\x2d662ce2a1edc9.mount: Deactivated successfully. Mar 7 01:07:23.684620 systemd[1]: run-netns-cni\x2daed43c14\x2d6cc0\x2d70dc\x2dcddd\x2ddd8e4a6e5add.mount: Deactivated successfully. Mar 7 01:07:23.684687 systemd[1]: run-netns-cni\x2d51fd9f00\x2d2033\x2d8542\x2d81b9\x2d6fccf0692b77.mount: Deactivated successfully. Mar 7 01:07:23.684753 systemd[1]: run-netns-cni\x2d8708fedb\x2db116\x2d19b2\x2d8a0d\x2de51c3561defb.mount: Deactivated successfully. Mar 7 01:07:23.684817 systemd[1]: run-netns-cni\x2dc29ef16b\x2d53bd\x2db4e5\x2da72c\x2d314fe61c1d17.mount: Deactivated successfully. Mar 7 01:07:23.684886 systemd[1]: var-lib-kubelet-pods-f81e418a\x2dc0cc\x2d43f5\x2d9c0e\x2defe353ae4eca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhd7w.mount: Deactivated successfully. 
Mar 7 01:07:23.684961 systemd[1]: var-lib-kubelet-pods-f81e418a\x2dc0cc\x2d43f5\x2d9c0e\x2defe353ae4eca-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.325 [ERROR][3799] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.368 [INFO][3799] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0 coredns-674b8bbfcf- kube-system ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3 889 0 2026-03-07 01:07:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-198-121 coredns-674b8bbfcf-9wt7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0890953b440 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.368 [INFO][3799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.540 [INFO][3871] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" 
HandleID="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.552 [INFO][3871] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" HandleID="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efc70), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-198-121", "pod":"coredns-674b8bbfcf-9wt7m", "timestamp":"2026-03-07 01:07:23.540452881 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ad760)} Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.552 [INFO][3871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.552 [INFO][3871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.552 [INFO][3871] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.555 [INFO][3871] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.560 [INFO][3871] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.567 [INFO][3871] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.572 [INFO][3871] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.575 [INFO][3871] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.575 [INFO][3871] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.578 [INFO][3871] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3 Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.588 [INFO][3871] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.597 [INFO][3871] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.194/26] block=192.168.26.192/26 
handle="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.598 [INFO][3871] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.194/26] handle="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" host="172-239-198-121" Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.598 [INFO][3871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:23.692476 containerd[1457]: 2026-03-07 01:07:23.598 [INFO][3871] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.194/26] IPv6=[] ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" HandleID="k8s-pod-network.8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.611 [INFO][3799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"coredns-674b8bbfcf-9wt7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0890953b440", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.611 [INFO][3799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.194/32] ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.611 [INFO][3799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0890953b440 ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.625 [INFO][3799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.630 [INFO][3799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3", Pod:"coredns-674b8bbfcf-9wt7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0890953b440", MAC:"06:de:f3:46:57:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:23.692978 containerd[1457]: 2026-03-07 01:07:23.662 [INFO][3799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-9wt7m" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:23.816303 systemd[1]: Removed slice kubepods-besteffort-podf81e418a_c0cc_43f5_9c0e_efe353ae4eca.slice - libcontainer container kubepods-besteffort-podf81e418a_c0cc_43f5_9c0e_efe353ae4eca.slice. Mar 7 01:07:23.877248 containerd[1457]: time="2026-03-07T01:07:23.877005004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:23.877248 containerd[1457]: time="2026-03-07T01:07:23.877060424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:23.877248 containerd[1457]: time="2026-03-07T01:07:23.877084194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:23.877248 containerd[1457]: time="2026-03-07T01:07:23.877171564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:23.933658 systemd-networkd[1382]: cali8bd6057a2ea: Link UP Mar 7 01:07:23.946484 systemd-networkd[1382]: cali8bd6057a2ea: Gained carrier Mar 7 01:07:23.962563 systemd[1]: run-containerd-runc-k8s.io-8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3-runc.PoetRW.mount: Deactivated successfully. Mar 7 01:07:23.978698 systemd[1]: Started cri-containerd-8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3.scope - libcontainer container 8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3. Mar 7 01:07:24.005521 systemd-networkd[1382]: calib61dc450fc7: Link UP Mar 7 01:07:24.009243 systemd-networkd[1382]: calib61dc450fc7: Gained carrier Mar 7 01:07:24.011375 kubelet[2539]: I0307 01:07:24.010133 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.488 [ERROR][3905] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.517 [INFO][3905] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0 calico-apiserver-7fd8959695- calico-system 1cf94d0a-9aa0-4302-a083-681de00390a5 893 0 2026-03-07 01:07:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd8959695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-198-121 calico-apiserver-7fd8959695-wdzzd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8bd6057a2ea [] [] }} 
ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.518 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.751 [INFO][3956] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" HandleID="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.772 [INFO][3956] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" HandleID="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"calico-apiserver-7fd8959695-wdzzd", "timestamp":"2026-03-07 01:07:23.751627989 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ff760)} Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.772 [INFO][3956] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.772 [INFO][3956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.772 [INFO][3956] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.779 [INFO][3956] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.793 [INFO][3956] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.809 [INFO][3956] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.816 [INFO][3956] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.824 [INFO][3956] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.824 [INFO][3956] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.827 [INFO][3956] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76 Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.840 [INFO][3956] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" host="172-239-198-121" Mar 7 01:07:24.040634 
containerd[1457]: 2026-03-07 01:07:23.851 [INFO][3956] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.195/26] block=192.168.26.192/26 handle="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.851 [INFO][3956] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.195/26] handle="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" host="172-239-198-121" Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.851 [INFO][3956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:24.040634 containerd[1457]: 2026-03-07 01:07:23.851 [INFO][3956] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.195/26] IPv6=[] ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" HandleID="k8s-pod-network.e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:23.888 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"1cf94d0a-9aa0-4302-a083-681de00390a5", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"calico-apiserver-7fd8959695-wdzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8bd6057a2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:23.889 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.195/32] ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:23.889 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bd6057a2ea ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:23.974 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" 
WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:23.976 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"1cf94d0a-9aa0-4302-a083-681de00390a5", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76", Pod:"calico-apiserver-7fd8959695-wdzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8bd6057a2ea", MAC:"82:c1:c9:42:69:82", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.041160 containerd[1457]: 2026-03-07 01:07:24.013 [INFO][3905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-wdzzd" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.574 [ERROR][3917] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.599 [INFO][3917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0 calico-apiserver-7fd8959695- calico-system ec64a8fa-f04f-42f8-b5ba-1f4af3044695 894 0 2026-03-07 01:07:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd8959695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-198-121 calico-apiserver-7fd8959695-r25tl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib61dc450fc7 [] [] }} ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.600 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" 
WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.793 [INFO][3979] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" HandleID="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.819 [INFO][3979] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" HandleID="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c1ce0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"calico-apiserver-7fd8959695-r25tl", "timestamp":"2026-03-07 01:07:23.793013129 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000236dc0)} Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.819 [INFO][3979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.857 [INFO][3979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.857 [INFO][3979] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.880 [INFO][3979] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.911 [INFO][3979] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.921 [INFO][3979] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.925 [INFO][3979] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.929 [INFO][3979] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.930 [INFO][3979] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.940 [INFO][3979] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100 Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.969 [INFO][3979] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.993 [INFO][3979] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.196/26] block=192.168.26.192/26 
handle="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.993 [INFO][3979] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.196/26] handle="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" host="172-239-198-121" Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.993 [INFO][3979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:24.082601 containerd[1457]: 2026-03-07 01:07:23.993 [INFO][3979] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.196/26] IPv6=[] ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" HandleID="k8s-pod-network.9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.000 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"ec64a8fa-f04f-42f8-b5ba-1f4af3044695", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"calico-apiserver-7fd8959695-r25tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib61dc450fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.000 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.196/32] ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.000 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib61dc450fc7 ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.015 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.022 [INFO][3917] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"ec64a8fa-f04f-42f8-b5ba-1f4af3044695", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100", Pod:"calico-apiserver-7fd8959695-r25tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib61dc450fc7", MAC:"2e:24:6a:4b:cb:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.083144 containerd[1457]: 2026-03-07 01:07:24.066 [INFO][3917] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100" Namespace="calico-system" Pod="calico-apiserver-7fd8959695-r25tl" WorkloadEndpoint="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:24.101350 systemd-networkd[1382]: cali1f5cdc19fb9: Gained IPv6LL Mar 7 01:07:24.145400 containerd[1457]: time="2026-03-07T01:07:24.144676847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9wt7m,Uid:ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3\"" Mar 7 01:07:24.150296 kubelet[2539]: E0307 01:07:24.149982 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:24.153862 containerd[1457]: time="2026-03-07T01:07:24.153738245Z" level=info msg="CreateContainer within sandbox \"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:07:24.170295 containerd[1457]: time="2026-03-07T01:07:24.169197140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:24.170295 containerd[1457]: time="2026-03-07T01:07:24.169252860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:24.191818 containerd[1457]: time="2026-03-07T01:07:24.179801944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.191818 containerd[1457]: time="2026-03-07T01:07:24.183081808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.205220 systemd[1]: Created slice kubepods-besteffort-podce1b301f_ab84_4fa2_8657_dbd84985f6a4.slice - libcontainer container kubepods-besteffort-podce1b301f_ab84_4fa2_8657_dbd84985f6a4.slice. Mar 7 01:07:24.229885 containerd[1457]: time="2026-03-07T01:07:24.209908390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:24.229885 containerd[1457]: time="2026-03-07T01:07:24.209952092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:24.229885 containerd[1457]: time="2026-03-07T01:07:24.209986872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.229885 containerd[1457]: time="2026-03-07T01:07:24.210070832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.238429 systemd[1]: Started cri-containerd-e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76.scope - libcontainer container e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76. Mar 7 01:07:24.245203 systemd-networkd[1382]: caliceb3b20c027: Link UP Mar 7 01:07:24.245488 systemd-networkd[1382]: caliceb3b20c027: Gained carrier Mar 7 01:07:24.262704 systemd[1]: Started cri-containerd-9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100.scope - libcontainer container 9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100. 
Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.604 [ERROR][3864] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.655 [INFO][3864] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0 coredns-674b8bbfcf- kube-system fc0c6376-e9a6-43ac-9b83-1647076d0c22 890 0 2026-03-07 01:07:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-198-121 coredns-674b8bbfcf-shk8j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliceb3b20c027 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.655 [INFO][3864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.839 [INFO][4022] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" HandleID="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.862 [INFO][4022] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" HandleID="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d4070), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-198-121", "pod":"coredns-674b8bbfcf-shk8j", "timestamp":"2026-03-07 01:07:23.839879993 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d7b80)} Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.862 [INFO][4022] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.994 [INFO][4022] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:23.995 [INFO][4022] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.019 [INFO][4022] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.061 [INFO][4022] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.080 [INFO][4022] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.085 [INFO][4022] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.097 [INFO][4022] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.101 [INFO][4022] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.109 [INFO][4022] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.142 [INFO][4022] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.180 [INFO][4022] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.197/26] block=192.168.26.192/26 
handle="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.182 [INFO][4022] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.197/26] handle="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" host="172-239-198-121" Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.186 [INFO][4022] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:24.286774 containerd[1457]: 2026-03-07 01:07:24.186 [INFO][4022] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.197/26] IPv6=[] ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" HandleID="k8s-pod-network.31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.230 [INFO][3864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc0c6376-e9a6-43ac-9b83-1647076d0c22", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"coredns-674b8bbfcf-shk8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb3b20c027", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.230 [INFO][3864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.197/32] ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.230 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb3b20c027 ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.241 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.246 [INFO][3864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc0c6376-e9a6-43ac-9b83-1647076d0c22", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb", Pod:"coredns-674b8bbfcf-shk8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb3b20c027", MAC:"4e:bd:d5:16:51:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.288500 containerd[1457]: 2026-03-07 01:07:24.279 [INFO][3864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb" Namespace="kube-system" Pod="coredns-674b8bbfcf-shk8j" WorkloadEndpoint="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0" Mar 7 01:07:24.305917 containerd[1457]: time="2026-03-07T01:07:24.305525243Z" level=info msg="CreateContainer within sandbox \"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37556fcf5098384bb531a6c55b55c81ccf19bd1682fd25700758075f8c42c247\"" Mar 7 01:07:24.307792 containerd[1457]: time="2026-03-07T01:07:24.307713312Z" level=info msg="StartContainer for \"37556fcf5098384bb531a6c55b55c81ccf19bd1682fd25700758075f8c42c247\"" Mar 7 01:07:24.309045 kubelet[2539]: I0307 01:07:24.308882 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1b301f-ab84-4fa2-8657-dbd84985f6a4-whisker-ca-bundle\") pod \"whisker-f857b6df5-v5xkq\" (UID: \"ce1b301f-ab84-4fa2-8657-dbd84985f6a4\") " pod="calico-system/whisker-f857b6df5-v5xkq" Mar 7 01:07:24.309045 kubelet[2539]: I0307 01:07:24.308925 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ce1b301f-ab84-4fa2-8657-dbd84985f6a4-nginx-config\") pod \"whisker-f857b6df5-v5xkq\" (UID: \"ce1b301f-ab84-4fa2-8657-dbd84985f6a4\") " 
pod="calico-system/whisker-f857b6df5-v5xkq" Mar 7 01:07:24.309045 kubelet[2539]: I0307 01:07:24.308980 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5k7p\" (UniqueName: \"kubernetes.io/projected/ce1b301f-ab84-4fa2-8657-dbd84985f6a4-kube-api-access-k5k7p\") pod \"whisker-f857b6df5-v5xkq\" (UID: \"ce1b301f-ab84-4fa2-8657-dbd84985f6a4\") " pod="calico-system/whisker-f857b6df5-v5xkq" Mar 7 01:07:24.309045 kubelet[2539]: I0307 01:07:24.309008 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce1b301f-ab84-4fa2-8657-dbd84985f6a4-whisker-backend-key-pair\") pod \"whisker-f857b6df5-v5xkq\" (UID: \"ce1b301f-ab84-4fa2-8657-dbd84985f6a4\") " pod="calico-system/whisker-f857b6df5-v5xkq" Mar 7 01:07:24.367395 systemd[1]: Started cri-containerd-37556fcf5098384bb531a6c55b55c81ccf19bd1682fd25700758075f8c42c247.scope - libcontainer container 37556fcf5098384bb531a6c55b55c81ccf19bd1682fd25700758075f8c42c247. Mar 7 01:07:24.396509 systemd-networkd[1382]: calid521e0effb9: Link UP Mar 7 01:07:24.396788 systemd-networkd[1382]: calid521e0effb9: Gained carrier Mar 7 01:07:24.429536 containerd[1457]: time="2026-03-07T01:07:24.429426853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:24.430210 containerd[1457]: time="2026-03-07T01:07:24.429702134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:24.430210 containerd[1457]: time="2026-03-07T01:07:24.429717574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.435295 containerd[1457]: time="2026-03-07T01:07:24.431426641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.441334 containerd[1457]: time="2026-03-07T01:07:24.440346109Z" level=info msg="StartContainer for \"37556fcf5098384bb531a6c55b55c81ccf19bd1682fd25700758075f8c42c247\" returns successfully" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:23.597 [ERROR][3932] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:23.656 [INFO][3932] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0 calico-kube-controllers-84fcdd589f- calico-system b2c5388f-e7db-4945-8a7f-ca5fddbf9992 895 0 2026-03-07 01:07:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fcdd589f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-198-121 calico-kube-controllers-84fcdd589f-7w9zs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid521e0effb9 [] [] }} ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:23.656 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:23.955 [INFO][4025] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" HandleID="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.000 [INFO][4025] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" HandleID="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"calico-kube-controllers-84fcdd589f-7w9zs", "timestamp":"2026-03-07 01:07:23.955586626 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000310420)} Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.000 [INFO][4025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.196 [INFO][4025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.196 [INFO][4025] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.271 [INFO][4025] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.293 [INFO][4025] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.298 [INFO][4025] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.301 [INFO][4025] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.305 [INFO][4025] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.306 [INFO][4025] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.321 [INFO][4025] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.339 [INFO][4025] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.369 [INFO][4025] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.198/26] block=192.168.26.192/26 
handle="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.369 [INFO][4025] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.198/26] handle="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" host="172-239-198-121" Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.369 [INFO][4025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:24.468822 containerd[1457]: 2026-03-07 01:07:24.369 [INFO][4025] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.198/26] IPv6=[] ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" HandleID="k8s-pod-network.29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.385 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0", GenerateName:"calico-kube-controllers-84fcdd589f-", Namespace:"calico-system", SelfLink:"", UID:"b2c5388f-e7db-4945-8a7f-ca5fddbf9992", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fcdd589f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"calico-kube-controllers-84fcdd589f-7w9zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid521e0effb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.386 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.198/32] ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.386 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid521e0effb9 ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.404 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" 
WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.411 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0", GenerateName:"calico-kube-controllers-84fcdd589f-", Namespace:"calico-system", SelfLink:"", UID:"b2c5388f-e7db-4945-8a7f-ca5fddbf9992", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fcdd589f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d", Pod:"calico-kube-controllers-84fcdd589f-7w9zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid521e0effb9", MAC:"a2:25:99:b5:c7:4a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.469437 containerd[1457]: 2026-03-07 01:07:24.460 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d" Namespace="calico-system" Pod="calico-kube-controllers-84fcdd589f-7w9zs" WorkloadEndpoint="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0" Mar 7 01:07:24.470476 systemd[1]: Started cri-containerd-31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb.scope - libcontainer container 31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb. Mar 7 01:07:24.524191 containerd[1457]: time="2026-03-07T01:07:24.524143461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f857b6df5-v5xkq,Uid:ce1b301f-ab84-4fa2-8657-dbd84985f6a4,Namespace:calico-system,Attempt:0,}" Mar 7 01:07:24.542926 containerd[1457]: time="2026-03-07T01:07:24.541125463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:24.542926 containerd[1457]: time="2026-03-07T01:07:24.541185223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:24.542926 containerd[1457]: time="2026-03-07T01:07:24.541198723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.542926 containerd[1457]: time="2026-03-07T01:07:24.541297443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.598546 systemd[1]: Started cri-containerd-29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d.scope - libcontainer container 29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d. Mar 7 01:07:24.614988 systemd-networkd[1382]: cali205253efd32: Link UP Mar 7 01:07:24.620285 systemd-networkd[1382]: cali205253efd32: Gained carrier Mar 7 01:07:24.665892 containerd[1457]: time="2026-03-07T01:07:24.663753638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk8j,Uid:fc0c6376-e9a6-43ac-9b83-1647076d0c22,Namespace:kube-system,Attempt:1,} returns sandbox id \"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb\"" Mar 7 01:07:24.676285 kubelet[2539]: E0307 01:07:24.672060 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:23.901 [ERROR][3977] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:23.977 [INFO][3977] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0 goldmane-5b85766d88- calico-system 2754896e-11fe-452b-a84a-c172f3237c2d 896 0 2026-03-07 01:07:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-198-121 goldmane-5b85766d88-pg6fp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali205253efd32 [] [] }} 
ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:23.978 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.335 [INFO][4103] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" HandleID="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.376 [INFO][4103] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" HandleID="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a5cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"goldmane-5b85766d88-pg6fp", "timestamp":"2026-03-07 01:07:24.335052347 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114dc0)} Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.383 [INFO][4103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.383 [INFO][4103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.383 [INFO][4103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.405 [INFO][4103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.451 [INFO][4103] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.481 [INFO][4103] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.501 [INFO][4103] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.512 [INFO][4103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.512 [INFO][4103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.521 [INFO][4103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628 Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.532 [INFO][4103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 
01:07:24.570 [INFO][4103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.199/26] block=192.168.26.192/26 handle="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.571 [INFO][4103] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.199/26] handle="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" host="172-239-198-121" Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.571 [INFO][4103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:24.685509 containerd[1457]: 2026-03-07 01:07:24.571 [INFO][4103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.199/26] IPv6=[] ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" HandleID="k8s-pod-network.659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.598 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2754896e-11fe-452b-a84a-c172f3237c2d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"goldmane-5b85766d88-pg6fp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali205253efd32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.598 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.199/32] ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.598 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali205253efd32 ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.618 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.623 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2754896e-11fe-452b-a84a-c172f3237c2d", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628", Pod:"goldmane-5b85766d88-pg6fp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali205253efd32", MAC:"e6:5d:c3:00:6f:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:24.686005 containerd[1457]: 2026-03-07 01:07:24.674 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628" Namespace="calico-system" 
Pod="goldmane-5b85766d88-pg6fp" WorkloadEndpoint="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:24.694772 containerd[1457]: time="2026-03-07T01:07:24.694634318Z" level=info msg="CreateContainer within sandbox \"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:07:24.733248 containerd[1457]: time="2026-03-07T01:07:24.731313482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:24.733248 containerd[1457]: time="2026-03-07T01:07:24.731420042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:24.733248 containerd[1457]: time="2026-03-07T01:07:24.731446572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.733248 containerd[1457]: time="2026-03-07T01:07:24.731536793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:24.777633 containerd[1457]: time="2026-03-07T01:07:24.777585056Z" level=info msg="CreateContainer within sandbox \"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d455e543cccc99f90c196f7e2bbaff71b5c9c14cd87712832629892bd19b73a\"" Mar 7 01:07:24.781098 containerd[1457]: time="2026-03-07T01:07:24.781058091Z" level=info msg="StartContainer for \"7d455e543cccc99f90c196f7e2bbaff71b5c9c14cd87712832629892bd19b73a\"" Mar 7 01:07:24.800422 systemd[1]: Started cri-containerd-659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628.scope - libcontainer container 659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628. 
Mar 7 01:07:24.837928 systemd[1]: Started cri-containerd-7d455e543cccc99f90c196f7e2bbaff71b5c9c14cd87712832629892bd19b73a.scope - libcontainer container 7d455e543cccc99f90c196f7e2bbaff71b5c9c14cd87712832629892bd19b73a. Mar 7 01:07:24.841634 containerd[1457]: time="2026-03-07T01:07:24.841594645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-wdzzd,Uid:1cf94d0a-9aa0-4302-a083-681de00390a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76\"" Mar 7 01:07:24.895353 containerd[1457]: time="2026-03-07T01:07:24.894723058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd8959695-r25tl,Uid:ec64a8fa-f04f-42f8-b5ba-1f4af3044695,Namespace:calico-system,Attempt:1,} returns sandbox id \"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100\"" Mar 7 01:07:24.941718 containerd[1457]: time="2026-03-07T01:07:24.941519275Z" level=info msg="StartContainer for \"7d455e543cccc99f90c196f7e2bbaff71b5c9c14cd87712832629892bd19b73a\" returns successfully" Mar 7 01:07:24.976853 containerd[1457]: time="2026-03-07T01:07:24.976803743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fcdd589f-7w9zs,Uid:b2c5388f-e7db-4945-8a7f-ca5fddbf9992,Namespace:calico-system,Attempt:1,} returns sandbox id \"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d\"" Mar 7 01:07:25.033549 kubelet[2539]: E0307 01:07:25.032869 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:25.036982 kubelet[2539]: E0307 01:07:25.036520 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:25.039696 containerd[1457]: 
time="2026-03-07T01:07:25.038821119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:25.040366 containerd[1457]: time="2026-03-07T01:07:25.040316545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:07:25.043189 containerd[1457]: time="2026-03-07T01:07:25.043164187Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:25.045343 containerd[1457]: time="2026-03-07T01:07:25.045321895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:25.046909 containerd[1457]: time="2026-03-07T01:07:25.046888301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.643756678s" Mar 7 01:07:25.046985 containerd[1457]: time="2026-03-07T01:07:25.046970562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:07:25.048410 containerd[1457]: time="2026-03-07T01:07:25.048392527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:07:25.051118 containerd[1457]: time="2026-03-07T01:07:25.051097218Z" level=info msg="CreateContainer within sandbox \"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 
01:07:25.056518 containerd[1457]: time="2026-03-07T01:07:25.056467980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pg6fp,Uid:2754896e-11fe-452b-a84a-c172f3237c2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628\"" Mar 7 01:07:25.060727 kubelet[2539]: I0307 01:07:25.060602 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-shk8j" podStartSLOduration=24.060586417 podStartE2EDuration="24.060586417s" podCreationTimestamp="2026-03-07 01:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:07:25.059711974 +0000 UTC m=+31.365899221" watchObservedRunningTime="2026-03-07 01:07:25.060586417 +0000 UTC m=+31.366773664" Mar 7 01:07:25.067633 containerd[1457]: time="2026-03-07T01:07:25.067558165Z" level=info msg="CreateContainer within sandbox \"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f88751b3ce3db24c8128b341ed57413da1ff426ce9452ce8bc475be9860a9e13\"" Mar 7 01:07:25.068217 containerd[1457]: time="2026-03-07T01:07:25.068184818Z" level=info msg="StartContainer for \"f88751b3ce3db24c8128b341ed57413da1ff426ce9452ce8bc475be9860a9e13\"" Mar 7 01:07:25.123235 systemd-networkd[1382]: calib61dc450fc7: Gained IPv6LL Mar 7 01:07:25.127709 systemd[1]: Started cri-containerd-f88751b3ce3db24c8128b341ed57413da1ff426ce9452ce8bc475be9860a9e13.scope - libcontainer container f88751b3ce3db24c8128b341ed57413da1ff426ce9452ce8bc475be9860a9e13. 
Mar 7 01:07:25.143224 kubelet[2539]: I0307 01:07:25.141389 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9wt7m" podStartSLOduration=24.141375035 podStartE2EDuration="24.141375035s" podCreationTimestamp="2026-03-07 01:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:07:25.100968001 +0000 UTC m=+31.407155238" watchObservedRunningTime="2026-03-07 01:07:25.141375035 +0000 UTC m=+31.447562272" Mar 7 01:07:25.165480 systemd-networkd[1382]: cali269cc3a4cef: Link UP Mar 7 01:07:25.167328 systemd-networkd[1382]: cali269cc3a4cef: Gained carrier Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.611 [ERROR][4302] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.678 [INFO][4302] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0 whisker-f857b6df5- calico-system ce1b301f-ab84-4fa2-8657-dbd84985f6a4 928 0 2026-03-07 01:07:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f857b6df5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-198-121 whisker-f857b6df5-v5xkq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali269cc3a4cef [] [] }} ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.678 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.943 [INFO][4346] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" HandleID="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Workload="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.964 [INFO][4346] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" HandleID="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Workload="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000386d00), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-198-121", "pod":"whisker-f857b6df5-v5xkq", "timestamp":"2026-03-07 01:07:24.943906165 +0000 UTC"}, Hostname:"172-239-198-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000224c60)} Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.964 [INFO][4346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.964 [INFO][4346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.965 [INFO][4346] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-198-121' Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:24.971 [INFO][4346] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.038 [INFO][4346] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.050 [INFO][4346] ipam/ipam.go 526: Trying affinity for 192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.057 [INFO][4346] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.073 [INFO][4346] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.192/26 host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.073 [INFO][4346] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.192/26 handle="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.086 [INFO][4346] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8 Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.094 [INFO][4346] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.192/26 handle="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.128 [INFO][4346] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.200/26] block=192.168.26.192/26 
handle="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.128 [INFO][4346] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.200/26] handle="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" host="172-239-198-121" Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.128 [INFO][4346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:25.182754 containerd[1457]: 2026-03-07 01:07:25.128 [INFO][4346] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.200/26] IPv6=[] ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" HandleID="k8s-pod-network.4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Workload="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.146 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0", GenerateName:"whisker-f857b6df5-", Namespace:"calico-system", SelfLink:"", UID:"ce1b301f-ab84-4fa2-8657-dbd84985f6a4", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f857b6df5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"", Pod:"whisker-f857b6df5-v5xkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali269cc3a4cef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.146 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.200/32] ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.147 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali269cc3a4cef ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.165 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.165 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" 
Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0", GenerateName:"whisker-f857b6df5-", Namespace:"calico-system", SelfLink:"", UID:"ce1b301f-ab84-4fa2-8657-dbd84985f6a4", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f857b6df5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8", Pod:"whisker-f857b6df5-v5xkq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali269cc3a4cef", MAC:"8e:bd:61:96:bd:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:25.183222 containerd[1457]: 2026-03-07 01:07:25.176 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8" Namespace="calico-system" Pod="whisker-f857b6df5-v5xkq" WorkloadEndpoint="172--239--198--121-k8s-whisker--f857b6df5--v5xkq-eth0" Mar 7 01:07:25.186937 systemd-networkd[1382]: cali0890953b440: Gained IPv6LL 
Mar 7 01:07:25.211995 containerd[1457]: time="2026-03-07T01:07:25.211871182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:07:25.211995 containerd[1457]: time="2026-03-07T01:07:25.211957212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:07:25.211995 containerd[1457]: time="2026-03-07T01:07:25.211972472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:25.212415 containerd[1457]: time="2026-03-07T01:07:25.212225534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:07:25.234469 systemd[1]: Started cri-containerd-4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8.scope - libcontainer container 4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8. 
Mar 7 01:07:25.284476 containerd[1457]: time="2026-03-07T01:07:25.282619349Z" level=info msg="StartContainer for \"f88751b3ce3db24c8128b341ed57413da1ff426ce9452ce8bc475be9860a9e13\" returns successfully" Mar 7 01:07:25.383776 containerd[1457]: time="2026-03-07T01:07:25.383473239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f857b6df5-v5xkq,Uid:ce1b301f-ab84-4fa2-8657-dbd84985f6a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8\"" Mar 7 01:07:25.388679 kernel: calico-node[4032]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:07:25.571414 systemd-networkd[1382]: cali8bd6057a2ea: Gained IPv6LL Mar 7 01:07:25.793326 kubelet[2539]: I0307 01:07:25.793286 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81e418a-c0cc-43f5-9c0e-efe353ae4eca" path="/var/lib/kubelet/pods/f81e418a-c0cc-43f5-9c0e-efe353ae4eca/volumes" Mar 7 01:07:25.981059 systemd-networkd[1382]: vxlan.calico: Link UP Mar 7 01:07:25.981073 systemd-networkd[1382]: vxlan.calico: Gained carrier Mar 7 01:07:26.046412 kubelet[2539]: E0307 01:07:26.044968 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:26.046412 kubelet[2539]: E0307 01:07:26.045286 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:26.082458 systemd-networkd[1382]: caliceb3b20c027: Gained IPv6LL Mar 7 01:07:26.210439 systemd-networkd[1382]: cali205253efd32: Gained IPv6LL Mar 7 01:07:26.274900 systemd-networkd[1382]: calid521e0effb9: Gained IPv6LL Mar 7 01:07:26.659638 systemd-networkd[1382]: cali269cc3a4cef: Gained IPv6LL Mar 7 01:07:27.047213 kubelet[2539]: E0307 01:07:27.047183 2539 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:27.048460 kubelet[2539]: E0307 01:07:27.047769 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:07:27.132902 containerd[1457]: time="2026-03-07T01:07:27.132840956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.133929 containerd[1457]: time="2026-03-07T01:07:27.133668970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:07:27.134901 containerd[1457]: time="2026-03-07T01:07:27.134820834Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.138311 containerd[1457]: time="2026-03-07T01:07:27.136725062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.139071 containerd[1457]: time="2026-03-07T01:07:27.138993520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.090485882s" Mar 7 01:07:27.139071 containerd[1457]: time="2026-03-07T01:07:27.139031510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image 
reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:07:27.141624 containerd[1457]: time="2026-03-07T01:07:27.141452489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:07:27.151379 containerd[1457]: time="2026-03-07T01:07:27.149897091Z" level=info msg="CreateContainer within sandbox \"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:07:27.173158 containerd[1457]: time="2026-03-07T01:07:27.173119481Z" level=info msg="CreateContainer within sandbox \"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e775f49078adc8b1e1c6e0cf67de9d0a42e157cd9aabf4b284bc3633a3283ba4\"" Mar 7 01:07:27.174193 containerd[1457]: time="2026-03-07T01:07:27.173775853Z" level=info msg="StartContainer for \"e775f49078adc8b1e1c6e0cf67de9d0a42e157cd9aabf4b284bc3633a3283ba4\"" Mar 7 01:07:27.218461 systemd[1]: Started cri-containerd-e775f49078adc8b1e1c6e0cf67de9d0a42e157cd9aabf4b284bc3633a3283ba4.scope - libcontainer container e775f49078adc8b1e1c6e0cf67de9d0a42e157cd9aabf4b284bc3633a3283ba4. 
Mar 7 01:07:27.260875 containerd[1457]: time="2026-03-07T01:07:27.260790925Z" level=info msg="StartContainer for \"e775f49078adc8b1e1c6e0cf67de9d0a42e157cd9aabf4b284bc3633a3283ba4\" returns successfully" Mar 7 01:07:27.318649 containerd[1457]: time="2026-03-07T01:07:27.318466014Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:27.319132 containerd[1457]: time="2026-03-07T01:07:27.318954176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:07:27.321975 containerd[1457]: time="2026-03-07T01:07:27.321937838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 180.456969ms" Mar 7 01:07:27.322024 containerd[1457]: time="2026-03-07T01:07:27.321978388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:07:27.323707 containerd[1457]: time="2026-03-07T01:07:27.323675335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:07:27.326661 containerd[1457]: time="2026-03-07T01:07:27.326633276Z" level=info msg="CreateContainer within sandbox \"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:07:27.349179 containerd[1457]: time="2026-03-07T01:07:27.348925971Z" level=info msg="CreateContainer within sandbox \"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"47930937a78d43d2cfc47519d59b4d9eb1f2e05a853951bb3b32483d7ab0c9f1\"" Mar 7 01:07:27.350717 containerd[1457]: time="2026-03-07T01:07:27.350680558Z" level=info msg="StartContainer for \"47930937a78d43d2cfc47519d59b4d9eb1f2e05a853951bb3b32483d7ab0c9f1\"" Mar 7 01:07:27.408427 systemd[1]: Started cri-containerd-47930937a78d43d2cfc47519d59b4d9eb1f2e05a853951bb3b32483d7ab0c9f1.scope - libcontainer container 47930937a78d43d2cfc47519d59b4d9eb1f2e05a853951bb3b32483d7ab0c9f1. Mar 7 01:07:27.462541 containerd[1457]: time="2026-03-07T01:07:27.462476584Z" level=info msg="StartContainer for \"47930937a78d43d2cfc47519d59b4d9eb1f2e05a853951bb3b32483d7ab0c9f1\" returns successfully" Mar 7 01:07:27.875441 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Mar 7 01:07:28.139794 kubelet[2539]: I0307 01:07:28.139628 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fd8959695-wdzzd" podStartSLOduration=14.841989945 podStartE2EDuration="17.139614421s" podCreationTimestamp="2026-03-07 01:07:11 +0000 UTC" firstStartedPulling="2026-03-07 01:07:24.842968631 +0000 UTC m=+31.149155878" lastFinishedPulling="2026-03-07 01:07:27.140593117 +0000 UTC m=+33.446780354" observedRunningTime="2026-03-07 01:07:28.126557183 +0000 UTC m=+34.432744420" watchObservedRunningTime="2026-03-07 01:07:28.139614421 +0000 UTC m=+34.445801658" Mar 7 01:07:28.181721 kubelet[2539]: I0307 01:07:28.181673 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fd8959695-r25tl" podStartSLOduration=14.772130681 podStartE2EDuration="17.181658556s" podCreationTimestamp="2026-03-07 01:07:11 +0000 UTC" firstStartedPulling="2026-03-07 01:07:24.913274856 +0000 UTC m=+31.219462093" lastFinishedPulling="2026-03-07 01:07:27.322802731 +0000 UTC m=+33.628989968" observedRunningTime="2026-03-07 01:07:28.178737935 +0000 UTC m=+34.484925202" watchObservedRunningTime="2026-03-07 01:07:28.181658556 +0000 UTC 
m=+34.487845793" Mar 7 01:07:29.094334 kubelet[2539]: I0307 01:07:29.094175 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:07:29.095352 kubelet[2539]: I0307 01:07:29.094177 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:07:29.162170 containerd[1457]: time="2026-03-07T01:07:29.161491955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:29.163240 containerd[1457]: time="2026-03-07T01:07:29.163156042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:07:29.164056 containerd[1457]: time="2026-03-07T01:07:29.164021034Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:29.166622 containerd[1457]: time="2026-03-07T01:07:29.166585974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:29.168669 containerd[1457]: time="2026-03-07T01:07:29.167100846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.843391331s" Mar 7 01:07:29.168669 containerd[1457]: time="2026-03-07T01:07:29.167434976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:07:29.168669 containerd[1457]: time="2026-03-07T01:07:29.168588371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:07:29.193070 containerd[1457]: time="2026-03-07T01:07:29.193031308Z" level=info msg="CreateContainer within sandbox \"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:07:29.207930 containerd[1457]: time="2026-03-07T01:07:29.206616588Z" level=info msg="CreateContainer within sandbox \"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d\"" Mar 7 01:07:29.209381 containerd[1457]: time="2026-03-07T01:07:29.208950346Z" level=info msg="StartContainer for \"866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d\"" Mar 7 01:07:29.211446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488248758.mount: Deactivated successfully. Mar 7 01:07:29.260484 systemd[1]: Started cri-containerd-866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d.scope - libcontainer container 866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d. 
Mar 7 01:07:29.352039 containerd[1457]: time="2026-03-07T01:07:29.351928650Z" level=info msg="StartContainer for \"866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d\" returns successfully" Mar 7 01:07:30.104606 kubelet[2539]: I0307 01:07:30.103576 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fcdd589f-7w9zs" podStartSLOduration=13.916174542 podStartE2EDuration="18.103560072s" podCreationTimestamp="2026-03-07 01:07:12 +0000 UTC" firstStartedPulling="2026-03-07 01:07:24.9809826 +0000 UTC m=+31.287169837" lastFinishedPulling="2026-03-07 01:07:29.16836813 +0000 UTC m=+35.474555367" observedRunningTime="2026-03-07 01:07:30.101899656 +0000 UTC m=+36.408086893" watchObservedRunningTime="2026-03-07 01:07:30.103560072 +0000 UTC m=+36.409747309" Mar 7 01:07:30.522427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302190556.mount: Deactivated successfully. Mar 7 01:07:30.582806 kubelet[2539]: I0307 01:07:30.582761 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:07:30.973788 containerd[1457]: time="2026-03-07T01:07:30.973724672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:30.974745 containerd[1457]: time="2026-03-07T01:07:30.974715616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:07:30.975152 containerd[1457]: time="2026-03-07T01:07:30.975114887Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:30.977288 containerd[1457]: time="2026-03-07T01:07:30.976819403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:30.977705 containerd[1457]: time="2026-03-07T01:07:30.977590856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.808980085s" Mar 7 01:07:30.977705 containerd[1457]: time="2026-03-07T01:07:30.977620706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:07:30.979217 containerd[1457]: time="2026-03-07T01:07:30.979125111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:07:30.982241 containerd[1457]: time="2026-03-07T01:07:30.982218392Z" level=info msg="CreateContainer within sandbox \"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:07:31.002724 containerd[1457]: time="2026-03-07T01:07:31.002678503Z" level=info msg="CreateContainer within sandbox \"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463\"" Mar 7 01:07:31.003350 containerd[1457]: time="2026-03-07T01:07:31.003300615Z" level=info msg="StartContainer for \"7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463\"" Mar 7 01:07:31.034401 systemd[1]: Started cri-containerd-7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463.scope - libcontainer container 7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463. 
Mar 7 01:07:31.082652 containerd[1457]: time="2026-03-07T01:07:31.082609246Z" level=info msg="StartContainer for \"7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463\" returns successfully" Mar 7 01:07:31.100659 kubelet[2539]: I0307 01:07:31.100626 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:07:31.731880 containerd[1457]: time="2026-03-07T01:07:31.730929380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:31.731880 containerd[1457]: time="2026-03-07T01:07:31.731819903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:07:31.732176 containerd[1457]: time="2026-03-07T01:07:31.732156365Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:31.734106 containerd[1457]: time="2026-03-07T01:07:31.734075271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:31.734852 containerd[1457]: time="2026-03-07T01:07:31.734829264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 755.681582ms" Mar 7 01:07:31.734928 containerd[1457]: time="2026-03-07T01:07:31.734912854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image 
reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:07:31.736301 containerd[1457]: time="2026-03-07T01:07:31.736285049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:07:31.738576 containerd[1457]: time="2026-03-07T01:07:31.738415656Z" level=info msg="CreateContainer within sandbox \"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:07:31.755830 containerd[1457]: time="2026-03-07T01:07:31.755805575Z" level=info msg="CreateContainer within sandbox \"90e16e74ebc30b59f59573ee6bd6fe9e00aca77ea91d6d326bfcf7c052a36b75\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f9dffb763643581d94a850faafa1c8966e9a5dd1fc50eba085b8f6a8ac93fb5e\"" Mar 7 01:07:31.757314 containerd[1457]: time="2026-03-07T01:07:31.757295650Z" level=info msg="StartContainer for \"f9dffb763643581d94a850faafa1c8966e9a5dd1fc50eba085b8f6a8ac93fb5e\"" Mar 7 01:07:31.802508 systemd[1]: Started cri-containerd-f9dffb763643581d94a850faafa1c8966e9a5dd1fc50eba085b8f6a8ac93fb5e.scope - libcontainer container f9dffb763643581d94a850faafa1c8966e9a5dd1fc50eba085b8f6a8ac93fb5e. 
Mar 7 01:07:31.833994 containerd[1457]: time="2026-03-07T01:07:31.833913660Z" level=info msg="StartContainer for \"f9dffb763643581d94a850faafa1c8966e9a5dd1fc50eba085b8f6a8ac93fb5e\" returns successfully" Mar 7 01:07:31.869098 kubelet[2539]: I0307 01:07:31.869075 2539 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:07:31.870969 kubelet[2539]: I0307 01:07:31.870948 2539 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:07:32.122277 kubelet[2539]: I0307 01:07:32.122128 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-pg6fp" podStartSLOduration=15.205636844 podStartE2EDuration="21.1221133s" podCreationTimestamp="2026-03-07 01:07:11 +0000 UTC" firstStartedPulling="2026-03-07 01:07:25.062347144 +0000 UTC m=+31.368534381" lastFinishedPulling="2026-03-07 01:07:30.97882359 +0000 UTC m=+37.285010837" observedRunningTime="2026-03-07 01:07:31.112485746 +0000 UTC m=+37.418672983" watchObservedRunningTime="2026-03-07 01:07:32.1221133 +0000 UTC m=+38.428300537" Mar 7 01:07:32.388956 containerd[1457]: time="2026-03-07T01:07:32.388773463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:32.389722 containerd[1457]: time="2026-03-07T01:07:32.389666886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:07:32.390439 containerd[1457]: time="2026-03-07T01:07:32.390409359Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:32.392437 containerd[1457]: 
time="2026-03-07T01:07:32.392390476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:32.393302 containerd[1457]: time="2026-03-07T01:07:32.393051998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 655.671646ms" Mar 7 01:07:32.393302 containerd[1457]: time="2026-03-07T01:07:32.393082938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:07:32.397733 containerd[1457]: time="2026-03-07T01:07:32.397619793Z" level=info msg="CreateContainer within sandbox \"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:07:32.406564 containerd[1457]: time="2026-03-07T01:07:32.406539862Z" level=info msg="CreateContainer within sandbox \"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"cb7f6f8cd784e9b9812bdff74f8dd44e9e3d4cd78336d7066aad66f1aecc7000\"" Mar 7 01:07:32.408509 containerd[1457]: time="2026-03-07T01:07:32.408343039Z" level=info msg="StartContainer for \"cb7f6f8cd784e9b9812bdff74f8dd44e9e3d4cd78336d7066aad66f1aecc7000\"" Mar 7 01:07:32.445402 systemd[1]: Started cri-containerd-cb7f6f8cd784e9b9812bdff74f8dd44e9e3d4cd78336d7066aad66f1aecc7000.scope - libcontainer container cb7f6f8cd784e9b9812bdff74f8dd44e9e3d4cd78336d7066aad66f1aecc7000. 
Mar 7 01:07:32.491252 containerd[1457]: time="2026-03-07T01:07:32.491099112Z" level=info msg="StartContainer for \"cb7f6f8cd784e9b9812bdff74f8dd44e9e3d4cd78336d7066aad66f1aecc7000\" returns successfully" Mar 7 01:07:32.494296 containerd[1457]: time="2026-03-07T01:07:32.493696981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:07:33.454183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016121100.mount: Deactivated successfully. Mar 7 01:07:33.464974 containerd[1457]: time="2026-03-07T01:07:33.464931150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:33.465674 containerd[1457]: time="2026-03-07T01:07:33.465637523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:07:33.466302 containerd[1457]: time="2026-03-07T01:07:33.466104085Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:33.474979 containerd[1457]: time="2026-03-07T01:07:33.474947263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:07:33.475539 containerd[1457]: time="2026-03-07T01:07:33.475508014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 981.782343ms" Mar 7 01:07:33.475586 containerd[1457]: 
time="2026-03-07T01:07:33.475544545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:07:33.479934 containerd[1457]: time="2026-03-07T01:07:33.479868029Z" level=info msg="CreateContainer within sandbox \"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:07:33.486911 containerd[1457]: time="2026-03-07T01:07:33.486841181Z" level=info msg="CreateContainer within sandbox \"4e26e0005fdf93c43da868f17b6eafb63dcf025128db92b2b0e8c80c036571b8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b37c155cf8380e0bb7ce98393eb318aa30337b994df2ab8fa508d9251e9c3511\"" Mar 7 01:07:33.487993 containerd[1457]: time="2026-03-07T01:07:33.487957265Z" level=info msg="StartContainer for \"b37c155cf8380e0bb7ce98393eb318aa30337b994df2ab8fa508d9251e9c3511\"" Mar 7 01:07:33.522408 systemd[1]: Started cri-containerd-b37c155cf8380e0bb7ce98393eb318aa30337b994df2ab8fa508d9251e9c3511.scope - libcontainer container b37c155cf8380e0bb7ce98393eb318aa30337b994df2ab8fa508d9251e9c3511. 
Mar 7 01:07:33.571386 containerd[1457]: time="2026-03-07T01:07:33.571345725Z" level=info msg="StartContainer for \"b37c155cf8380e0bb7ce98393eb318aa30337b994df2ab8fa508d9251e9c3511\" returns successfully"
Mar 7 01:07:33.887722 kubelet[2539]: I0307 01:07:33.887602 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:07:33.989518 kubelet[2539]: I0307 01:07:33.989443 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wh6l9" podStartSLOduration=13.655507547 podStartE2EDuration="21.989429746s" podCreationTimestamp="2026-03-07 01:07:12 +0000 UTC" firstStartedPulling="2026-03-07 01:07:23.401853268 +0000 UTC m=+29.708040505" lastFinishedPulling="2026-03-07 01:07:31.735775467 +0000 UTC m=+38.041962704" observedRunningTime="2026-03-07 01:07:32.124040536 +0000 UTC m=+38.430227793" watchObservedRunningTime="2026-03-07 01:07:33.989429746 +0000 UTC m=+40.295616983"
Mar 7 01:07:34.647424 systemd[1]: run-containerd-runc-k8s.io-7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463-runc.QXNNhN.mount: Deactivated successfully.
Mar 7 01:07:43.410090 kubelet[2539]: I0307 01:07:43.409428 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:07:43.488319 kubelet[2539]: I0307 01:07:43.487518 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f857b6df5-v5xkq" podStartSLOduration=11.398365234 podStartE2EDuration="19.487498058s" podCreationTimestamp="2026-03-07 01:07:24 +0000 UTC" firstStartedPulling="2026-03-07 01:07:25.387123563 +0000 UTC m=+31.693310800" lastFinishedPulling="2026-03-07 01:07:33.476256387 +0000 UTC m=+39.782443624" observedRunningTime="2026-03-07 01:07:34.120717021 +0000 UTC m=+40.426904258" watchObservedRunningTime="2026-03-07 01:07:43.487498058 +0000 UTC m=+49.793685325"
Mar 7 01:07:44.701794 kubelet[2539]: I0307 01:07:44.701308 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:07:52.464357 systemd[1]: Started sshd@7-172.239.198.121:22-185.156.73.233:16018.service - OpenSSH per-connection server daemon (185.156.73.233:16018).
Mar 7 01:07:53.777539 containerd[1457]: time="2026-03-07T01:07:53.777465094Z" level=info msg="StopPodSandbox for \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\""
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.818 [WARNING][5239] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc0c6376-e9a6-43ac-9b83-1647076d0c22", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb", Pod:"coredns-674b8bbfcf-shk8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb3b20c027", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.818 [INFO][5239] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.818 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" iface="eth0" netns=""
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.819 [INFO][5239] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.819 [INFO][5239] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.837 [INFO][5248] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.837 [INFO][5248] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.837 [INFO][5248] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.842 [WARNING][5248] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.842 [INFO][5248] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.843 [INFO][5248] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:53.849145 containerd[1457]: 2026-03-07 01:07:53.846 [INFO][5239] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.849145 containerd[1457]: time="2026-03-07T01:07:53.849021731Z" level=info msg="TearDown network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" successfully"
Mar 7 01:07:53.849145 containerd[1457]: time="2026-03-07T01:07:53.849047001Z" level=info msg="StopPodSandbox for \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" returns successfully"
Mar 7 01:07:53.850421 containerd[1457]: time="2026-03-07T01:07:53.849607963Z" level=info msg="RemovePodSandbox for \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\""
Mar 7 01:07:53.850421 containerd[1457]: time="2026-03-07T01:07:53.849823274Z" level=info msg="Forcibly stopping sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\""
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.888 [WARNING][5262] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fc0c6376-e9a6-43ac-9b83-1647076d0c22", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"31b3d025e9766e769e718000106d13fe0f5d21a26c1fcaa997481a61f34bf2bb", Pod:"coredns-674b8bbfcf-shk8j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb3b20c027", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.888 [INFO][5262] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.888 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" iface="eth0" netns=""
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.888 [INFO][5262] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.888 [INFO][5262] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.914 [INFO][5270] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.914 [INFO][5270] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.915 [INFO][5270] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.920 [WARNING][5270] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.920 [INFO][5270] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" HandleID="k8s-pod-network.247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--shk8j-eth0"
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.922 [INFO][5270] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:53.926624 containerd[1457]: 2026-03-07 01:07:53.924 [INFO][5262] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575"
Mar 7 01:07:53.927086 containerd[1457]: time="2026-03-07T01:07:53.926658683Z" level=info msg="TearDown network for sandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" successfully"
Mar 7 01:07:53.931149 containerd[1457]: time="2026-03-07T01:07:53.931125133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:07:53.931219 containerd[1457]: time="2026-03-07T01:07:53.931181073Z" level=info msg="RemovePodSandbox \"247020b391130e578570ddc14a390c009204f12c10d5129d9a402d76bc18d575\" returns successfully"
Mar 7 01:07:53.931667 containerd[1457]: time="2026-03-07T01:07:53.931647734Z" level=info msg="StopPodSandbox for \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\""
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.965 [WARNING][5284] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0", GenerateName:"calico-kube-controllers-84fcdd589f-", Namespace:"calico-system", SelfLink:"", UID:"b2c5388f-e7db-4945-8a7f-ca5fddbf9992", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fcdd589f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d", Pod:"calico-kube-controllers-84fcdd589f-7w9zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid521e0effb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.965 [INFO][5284] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.965 [INFO][5284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" iface="eth0" netns=""
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.965 [INFO][5284] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.965 [INFO][5284] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.987 [INFO][5291] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.988 [INFO][5291] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.988 [INFO][5291] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.992 [WARNING][5291] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.992 [INFO][5291] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:53.994 [INFO][5291] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:54.006312 containerd[1457]: 2026-03-07 01:07:54.000 [INFO][5284] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.006845 containerd[1457]: time="2026-03-07T01:07:54.006329790Z" level=info msg="TearDown network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" successfully"
Mar 7 01:07:54.006845 containerd[1457]: time="2026-03-07T01:07:54.006353160Z" level=info msg="StopPodSandbox for \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" returns successfully"
Mar 7 01:07:54.006845 containerd[1457]: time="2026-03-07T01:07:54.006780051Z" level=info msg="RemovePodSandbox for \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\""
Mar 7 01:07:54.006845 containerd[1457]: time="2026-03-07T01:07:54.006802741Z" level=info msg="Forcibly stopping sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\""
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.042 [WARNING][5305] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0", GenerateName:"calico-kube-controllers-84fcdd589f-", Namespace:"calico-system", SelfLink:"", UID:"b2c5388f-e7db-4945-8a7f-ca5fddbf9992", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fcdd589f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"29434728e7d9c9816f063962a2f6eb6b9ca5e75b2400ae8ebdf03222d960bc6d", Pod:"calico-kube-controllers-84fcdd589f-7w9zs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid521e0effb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.043 [INFO][5305] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.043 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" iface="eth0" netns=""
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.043 [INFO][5305] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.043 [INFO][5305] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.063 [INFO][5313] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.063 [INFO][5313] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.063 [INFO][5313] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.070 [WARNING][5313] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.070 [INFO][5313] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" HandleID="k8s-pod-network.31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df" Workload="172--239--198--121-k8s-calico--kube--controllers--84fcdd589f--7w9zs-eth0"
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.072 [INFO][5313] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:54.077421 containerd[1457]: 2026-03-07 01:07:54.074 [INFO][5305] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df"
Mar 7 01:07:54.077934 containerd[1457]: time="2026-03-07T01:07:54.077906815Z" level=info msg="TearDown network for sandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" successfully"
Mar 7 01:07:54.081478 containerd[1457]: time="2026-03-07T01:07:54.081453243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:07:54.081564 containerd[1457]: time="2026-03-07T01:07:54.081503593Z" level=info msg="RemovePodSandbox \"31a22b543a6c3ab693de1cf6fab15e9f50223267079f13216bc5d35eadd960df\" returns successfully"
Mar 7 01:07:54.081879 containerd[1457]: time="2026-03-07T01:07:54.081860035Z" level=info msg="StopPodSandbox for \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\""
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.113 [WARNING][5327] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" WorkloadEndpoint="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.113 [INFO][5327] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.113 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" iface="eth0" netns=""
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.113 [INFO][5327] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.113 [INFO][5327] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.135 [INFO][5334] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.135 [INFO][5334] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.135 [INFO][5334] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.139 [WARNING][5334] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.139 [INFO][5334] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.141 [INFO][5334] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:54.145993 containerd[1457]: 2026-03-07 01:07:54.143 [INFO][5327] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.145993 containerd[1457]: time="2026-03-07T01:07:54.145917733Z" level=info msg="TearDown network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" successfully"
Mar 7 01:07:54.145993 containerd[1457]: time="2026-03-07T01:07:54.145938013Z" level=info msg="StopPodSandbox for \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" returns successfully"
Mar 7 01:07:54.146792 containerd[1457]: time="2026-03-07T01:07:54.146744565Z" level=info msg="RemovePodSandbox for \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\""
Mar 7 01:07:54.146792 containerd[1457]: time="2026-03-07T01:07:54.146768295Z" level=info msg="Forcibly stopping sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\""
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.177 [WARNING][5349] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" WorkloadEndpoint="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.177 [INFO][5349] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.177 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" iface="eth0" netns=""
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.177 [INFO][5349] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.177 [INFO][5349] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.196 [INFO][5357] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.196 [INFO][5357] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.196 [INFO][5357] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.202 [WARNING][5357] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.202 [INFO][5357] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" HandleID="k8s-pod-network.ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526" Workload="172--239--198--121-k8s-whisker--98c65c778--2zvfj-eth0"
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.203 [INFO][5357] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:54.208215 containerd[1457]: 2026-03-07 01:07:54.205 [INFO][5349] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526"
Mar 7 01:07:54.208215 containerd[1457]: time="2026-03-07T01:07:54.207971697Z" level=info msg="TearDown network for sandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" successfully"
Mar 7 01:07:54.211660 containerd[1457]: time="2026-03-07T01:07:54.211620075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:07:54.211733 containerd[1457]: time="2026-03-07T01:07:54.211674875Z" level=info msg="RemovePodSandbox \"ac4d8807a0b08a265be1c12a28bf5deae974daed9f96362ed26697e96a769526\" returns successfully"
Mar 7 01:07:54.212154 containerd[1457]: time="2026-03-07T01:07:54.212135286Z" level=info msg="StopPodSandbox for \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\""
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.257 [WARNING][5371] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3", Pod:"coredns-674b8bbfcf-9wt7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0890953b440", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.257 [INFO][5371] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.257 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" iface="eth0" netns=""
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.257 [INFO][5371] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.257 [INFO][5371] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.281 [INFO][5378] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.281 [INFO][5378] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.281 [INFO][5378] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.287 [WARNING][5378] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.287 [INFO][5378] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0"
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.288 [INFO][5378] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:07:54.293330 containerd[1457]: 2026-03-07 01:07:54.290 [INFO][5371] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d"
Mar 7 01:07:54.293330 containerd[1457]: time="2026-03-07T01:07:54.293194594Z" level=info msg="TearDown network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" successfully"
Mar 7 01:07:54.293330 containerd[1457]: time="2026-03-07T01:07:54.293218794Z" level=info msg="StopPodSandbox for \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" returns successfully"
Mar 7 01:07:54.294001 containerd[1457]: time="2026-03-07T01:07:54.293717835Z" level=info msg="RemovePodSandbox for \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\""
Mar 7 01:07:54.294001 containerd[1457]: time="2026-03-07T01:07:54.293741005Z" level=info msg="Forcibly stopping sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\""
Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.328 [WARNING][5393] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee7f7577-2f7b-4c36-bfd3-e0c694ed04f3", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"8e51b615fcd134b19e2e2ec7461681f0066530b6266c7709aba05164bc15a8d3", Pod:"coredns-674b8bbfcf-9wt7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0890953b440", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.328 
[INFO][5393] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.328 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" iface="eth0" netns="" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.328 [INFO][5393] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.328 [INFO][5393] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.350 [INFO][5400] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.350 [INFO][5400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.350 [INFO][5400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.356 [WARNING][5400] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.356 [INFO][5400] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" HandleID="k8s-pod-network.fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Workload="172--239--198--121-k8s-coredns--674b8bbfcf--9wt7m-eth0" Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.357 [INFO][5400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.362853 containerd[1457]: 2026-03-07 01:07:54.360 [INFO][5393] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d" Mar 7 01:07:54.363222 containerd[1457]: time="2026-03-07T01:07:54.362846166Z" level=info msg="TearDown network for sandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" successfully" Mar 7 01:07:54.367080 containerd[1457]: time="2026-03-07T01:07:54.367057045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:07:54.367173 containerd[1457]: time="2026-03-07T01:07:54.367121505Z" level=info msg="RemovePodSandbox \"fd73e597fc76edc7df941f0ac5cce40ff3d38b96302b5e24b4963f60065f279d\" returns successfully" Mar 7 01:07:54.367550 containerd[1457]: time="2026-03-07T01:07:54.367521856Z" level=info msg="StopPodSandbox for \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\"" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.397 [WARNING][5414] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"1cf94d0a-9aa0-4302-a083-681de00390a5", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76", Pod:"calico-apiserver-7fd8959695-wdzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8bd6057a2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.398 [INFO][5414] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.398 [INFO][5414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" iface="eth0" netns="" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.398 [INFO][5414] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.398 [INFO][5414] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.417 [INFO][5422] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.418 [INFO][5422] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.418 [INFO][5422] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.423 [WARNING][5422] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.423 [INFO][5422] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.424 [INFO][5422] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.432045 containerd[1457]: 2026-03-07 01:07:54.429 [INFO][5414] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.432487 containerd[1457]: time="2026-03-07T01:07:54.432080316Z" level=info msg="TearDown network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" successfully" Mar 7 01:07:54.432487 containerd[1457]: time="2026-03-07T01:07:54.432104646Z" level=info msg="StopPodSandbox for \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" returns successfully" Mar 7 01:07:54.432552 containerd[1457]: time="2026-03-07T01:07:54.432519267Z" level=info msg="RemovePodSandbox for \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\"" Mar 7 01:07:54.432651 containerd[1457]: time="2026-03-07T01:07:54.432635287Z" level=info msg="Forcibly stopping sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\"" Mar 7 01:07:54.493822 sshd[5228]: Invalid user admin from 185.156.73.233 port 16018 Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.463 [WARNING][5436] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"1cf94d0a-9aa0-4302-a083-681de00390a5", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"e208ed3d70b0a1f52861ff55dbbad04e532c68aed4bff49e76b4da12f89e8d76", Pod:"calico-apiserver-7fd8959695-wdzzd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8bd6057a2ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.464 [INFO][5436] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.464 [INFO][5436] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" iface="eth0" netns="" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.464 [INFO][5436] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.464 [INFO][5436] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.486 [INFO][5443] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.486 [INFO][5443] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.486 [INFO][5443] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.496 [WARNING][5443] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.496 [INFO][5443] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" HandleID="k8s-pod-network.40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--wdzzd-eth0" Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.497 [INFO][5443] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.503384 containerd[1457]: 2026-03-07 01:07:54.500 [INFO][5436] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e" Mar 7 01:07:54.503761 containerd[1457]: time="2026-03-07T01:07:54.503444401Z" level=info msg="TearDown network for sandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" successfully" Mar 7 01:07:54.506962 containerd[1457]: time="2026-03-07T01:07:54.506939100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:07:54.507305 containerd[1457]: time="2026-03-07T01:07:54.506991180Z" level=info msg="RemovePodSandbox \"40824db6cabae0314fed74612f9ed107bc177924e51541b1aa591a712fae352e\" returns successfully" Mar 7 01:07:54.507467 containerd[1457]: time="2026-03-07T01:07:54.507446941Z" level=info msg="StopPodSandbox for \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\"" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.538 [WARNING][5457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2754896e-11fe-452b-a84a-c172f3237c2d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628", Pod:"goldmane-5b85766d88-pg6fp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali205253efd32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.538 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.538 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" iface="eth0" netns="" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.538 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.538 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.562 [INFO][5464] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.562 [INFO][5464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.562 [INFO][5464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.567 [WARNING][5464] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.567 [INFO][5464] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.569 [INFO][5464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.573305 containerd[1457]: 2026-03-07 01:07:54.571 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.573735 containerd[1457]: time="2026-03-07T01:07:54.573705724Z" level=info msg="TearDown network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" successfully" Mar 7 01:07:54.573735 containerd[1457]: time="2026-03-07T01:07:54.573734674Z" level=info msg="StopPodSandbox for \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" returns successfully" Mar 7 01:07:54.574113 containerd[1457]: time="2026-03-07T01:07:54.574094465Z" level=info msg="RemovePodSandbox for \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\"" Mar 7 01:07:54.574171 containerd[1457]: time="2026-03-07T01:07:54.574121575Z" level=info msg="Forcibly stopping sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\"" Mar 7 01:07:54.620396 sshd[5228]: Connection closed by invalid user admin 185.156.73.233 port 16018 [preauth] Mar 7 01:07:54.622896 systemd[1]: sshd@7-172.239.198.121:22-185.156.73.233:16018.service: Deactivated successfully. 
Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.605 [WARNING][5478] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2754896e-11fe-452b-a84a-c172f3237c2d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"659bb48862c34bda3632d26d3e00bca3ed6209706e907674d59890173cd9a628", Pod:"goldmane-5b85766d88-pg6fp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali205253efd32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.606 [INFO][5478] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 
01:07:54.606 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" iface="eth0" netns="" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.606 [INFO][5478] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.606 [INFO][5478] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.630 [INFO][5485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.630 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.630 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.635 [WARNING][5485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.635 [INFO][5485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" HandleID="k8s-pod-network.807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Workload="172--239--198--121-k8s-goldmane--5b85766d88--pg6fp-eth0" Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.636 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.641780 containerd[1457]: 2026-03-07 01:07:54.639 [INFO][5478] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5" Mar 7 01:07:54.642183 containerd[1457]: time="2026-03-07T01:07:54.641817171Z" level=info msg="TearDown network for sandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" successfully" Mar 7 01:07:54.647104 containerd[1457]: time="2026-03-07T01:07:54.646957764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:07:54.647104 containerd[1457]: time="2026-03-07T01:07:54.647011344Z" level=info msg="RemovePodSandbox \"807979e2cb57b641232ef5dad05ef476c05c8c33a91742a8ca2424dbde5392d5\" returns successfully" Mar 7 01:07:54.647765 containerd[1457]: time="2026-03-07T01:07:54.647652275Z" level=info msg="StopPodSandbox for \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\"" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.675 [WARNING][5502] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"ec64a8fa-f04f-42f8-b5ba-1f4af3044695", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100", Pod:"calico-apiserver-7fd8959695-r25tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib61dc450fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.675 [INFO][5502] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.675 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" iface="eth0" netns="" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.675 [INFO][5502] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.675 [INFO][5502] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.696 [INFO][5509] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.696 [INFO][5509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.696 [INFO][5509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.701 [WARNING][5509] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.701 [INFO][5509] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.702 [INFO][5509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.706644 containerd[1457]: 2026-03-07 01:07:54.704 [INFO][5502] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.707066 containerd[1457]: time="2026-03-07T01:07:54.706717332Z" level=info msg="TearDown network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" successfully" Mar 7 01:07:54.707066 containerd[1457]: time="2026-03-07T01:07:54.706741312Z" level=info msg="StopPodSandbox for \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" returns successfully" Mar 7 01:07:54.707480 containerd[1457]: time="2026-03-07T01:07:54.707179843Z" level=info msg="RemovePodSandbox for \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\"" Mar 7 01:07:54.707480 containerd[1457]: time="2026-03-07T01:07:54.707211513Z" level=info msg="Forcibly stopping sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\"" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.742 [WARNING][5523] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0", GenerateName:"calico-apiserver-7fd8959695-", Namespace:"calico-system", SelfLink:"", UID:"ec64a8fa-f04f-42f8-b5ba-1f4af3044695", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd8959695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-198-121", ContainerID:"9029f16008b0153e79df83c5534ecf1394c6d5e1993271511fd8edc1bc523100", Pod:"calico-apiserver-7fd8959695-r25tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib61dc450fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.742 [INFO][5523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.742 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" iface="eth0" netns="" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.742 [INFO][5523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.742 [INFO][5523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.766 [INFO][5530] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.766 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.766 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.771 [WARNING][5530] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.771 [INFO][5530] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" HandleID="k8s-pod-network.0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Workload="172--239--198--121-k8s-calico--apiserver--7fd8959695--r25tl-eth0" Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.773 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:07:54.778492 containerd[1457]: 2026-03-07 01:07:54.775 [INFO][5523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98" Mar 7 01:07:54.779223 containerd[1457]: time="2026-03-07T01:07:54.779187350Z" level=info msg="TearDown network for sandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" successfully" Mar 7 01:07:54.782765 containerd[1457]: time="2026-03-07T01:07:54.782739538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:07:54.782850 containerd[1457]: time="2026-03-07T01:07:54.782794818Z" level=info msg="RemovePodSandbox \"0fe19a559523551822ae8a27e2072d6345810ed1562636054bd4dad9bdad6d98\" returns successfully" Mar 7 01:08:03.133100 systemd[1]: run-containerd-runc-k8s.io-7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463-runc.jn8DCn.mount: Deactivated successfully. 
Mar 7 01:08:12.787161 kubelet[2539]: E0307 01:08:12.787089 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:18.786830 kubelet[2539]: E0307 01:08:18.786795 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:21.787525 kubelet[2539]: E0307 01:08:21.786668 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:27.787440 kubelet[2539]: E0307 01:08:27.786409 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:29.787313 kubelet[2539]: E0307 01:08:29.786805 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:34.786441 kubelet[2539]: E0307 01:08:34.786408 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:48.488440 systemd[1]: Started sshd@8-172.239.198.121:22-68.220.241.50:49040.service - OpenSSH per-connection server daemon (68.220.241.50:49040). Mar 7 01:08:48.647702 sshd[5744]: Accepted publickey for core from 68.220.241.50 port 49040 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:08:48.648568 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:08:48.653711 systemd-logind[1442]: New session 8 of user core. 
Mar 7 01:08:48.657392 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:08:48.873011 sshd[5744]: pam_unix(sshd:session): session closed for user core Mar 7 01:08:48.878021 systemd[1]: sshd@8-172.239.198.121:22-68.220.241.50:49040.service: Deactivated successfully. Mar 7 01:08:48.881972 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:08:48.883568 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:08:48.884935 systemd-logind[1442]: Removed session 8. Mar 7 01:08:53.788303 kubelet[2539]: E0307 01:08:53.788192 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Mar 7 01:08:53.904000 systemd[1]: Started sshd@9-172.239.198.121:22-68.220.241.50:58198.service - OpenSSH per-connection server daemon (68.220.241.50:58198). Mar 7 01:08:54.058340 sshd[5760]: Accepted publickey for core from 68.220.241.50 port 58198 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:08:54.058951 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:08:54.064401 systemd-logind[1442]: New session 9 of user core. Mar 7 01:08:54.067597 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:08:54.283071 sshd[5760]: pam_unix(sshd:session): session closed for user core Mar 7 01:08:54.288821 systemd[1]: sshd@9-172.239.198.121:22-68.220.241.50:58198.service: Deactivated successfully. Mar 7 01:08:54.291521 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:08:54.292472 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:08:54.294123 systemd-logind[1442]: Removed session 9. Mar 7 01:08:59.313368 systemd[1]: Started sshd@10-172.239.198.121:22-68.220.241.50:58206.service - OpenSSH per-connection server daemon (68.220.241.50:58206). 
Mar 7 01:08:59.475640 sshd[5786]: Accepted publickey for core from 68.220.241.50 port 58206 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:08:59.477445 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:08:59.481912 systemd-logind[1442]: New session 10 of user core. Mar 7 01:08:59.486401 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:08:59.673397 sshd[5786]: pam_unix(sshd:session): session closed for user core Mar 7 01:08:59.678850 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:08:59.679591 systemd[1]: sshd@10-172.239.198.121:22-68.220.241.50:58206.service: Deactivated successfully. Mar 7 01:08:59.682081 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:08:59.683024 systemd-logind[1442]: Removed session 10. Mar 7 01:08:59.707451 systemd[1]: Started sshd@11-172.239.198.121:22-68.220.241.50:58210.service - OpenSSH per-connection server daemon (68.220.241.50:58210). Mar 7 01:08:59.874663 sshd[5801]: Accepted publickey for core from 68.220.241.50 port 58210 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:08:59.876756 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:08:59.882018 systemd-logind[1442]: New session 11 of user core. Mar 7 01:08:59.887404 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:09:00.118607 sshd[5801]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:00.125084 systemd[1]: sshd@11-172.239.198.121:22-68.220.241.50:58210.service: Deactivated successfully. Mar 7 01:09:00.129002 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:09:00.131431 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:09:00.133875 systemd-logind[1442]: Removed session 11. 
Mar 7 01:09:00.160573 systemd[1]: Started sshd@12-172.239.198.121:22-68.220.241.50:58216.service - OpenSSH per-connection server daemon (68.220.241.50:58216). Mar 7 01:09:00.318649 sshd[5816]: Accepted publickey for core from 68.220.241.50 port 58216 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:00.320554 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:00.328413 systemd-logind[1442]: New session 12 of user core. Mar 7 01:09:00.335388 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:09:00.540989 sshd[5816]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:00.545141 systemd[1]: sshd@12-172.239.198.121:22-68.220.241.50:58216.service: Deactivated successfully. Mar 7 01:09:00.549244 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:09:00.549856 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:09:00.550812 systemd-logind[1442]: Removed session 12. Mar 7 01:09:03.129051 systemd[1]: run-containerd-runc-k8s.io-7faee0541bfefe6a142666e6dff53a163adb392cfad668ffb0ef45df6aa40463-runc.GEVZWP.mount: Deactivated successfully. Mar 7 01:09:05.575694 systemd[1]: Started sshd@13-172.239.198.121:22-68.220.241.50:54940.service - OpenSSH per-connection server daemon (68.220.241.50:54940). Mar 7 01:09:05.750707 sshd[5909]: Accepted publickey for core from 68.220.241.50 port 54940 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:05.752735 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:05.758397 systemd-logind[1442]: New session 13 of user core. Mar 7 01:09:05.763407 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:09:05.969606 sshd[5909]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:05.974944 systemd[1]: sshd@13-172.239.198.121:22-68.220.241.50:54940.service: Deactivated successfully. 
Mar 7 01:09:05.975377 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:09:05.977871 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:09:05.979077 systemd-logind[1442]: Removed session 13. Mar 7 01:09:06.012991 systemd[1]: Started sshd@14-172.239.198.121:22-68.220.241.50:54942.service - OpenSSH per-connection server daemon (68.220.241.50:54942). Mar 7 01:09:06.207653 sshd[5922]: Accepted publickey for core from 68.220.241.50 port 54942 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:06.208351 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:06.213799 systemd-logind[1442]: New session 14 of user core. Mar 7 01:09:06.217411 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:09:06.573322 sshd[5922]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:06.577260 systemd[1]: sshd@14-172.239.198.121:22-68.220.241.50:54942.service: Deactivated successfully. Mar 7 01:09:06.579419 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:09:06.580041 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:09:06.581189 systemd-logind[1442]: Removed session 14. Mar 7 01:09:06.599524 systemd[1]: Started sshd@15-172.239.198.121:22-68.220.241.50:54944.service - OpenSSH per-connection server daemon (68.220.241.50:54944). Mar 7 01:09:06.759836 sshd[5933]: Accepted publickey for core from 68.220.241.50 port 54944 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:06.761367 sshd[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:06.766293 systemd-logind[1442]: New session 15 of user core. Mar 7 01:09:06.772388 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 7 01:09:07.452288 sshd[5933]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:07.458832 systemd[1]: sshd@15-172.239.198.121:22-68.220.241.50:54944.service: Deactivated successfully. Mar 7 01:09:07.461823 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:09:07.463959 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:09:07.465394 systemd-logind[1442]: Removed session 15. Mar 7 01:09:07.491645 systemd[1]: Started sshd@16-172.239.198.121:22-68.220.241.50:54946.service - OpenSSH per-connection server daemon (68.220.241.50:54946). Mar 7 01:09:07.646292 sshd[5957]: Accepted publickey for core from 68.220.241.50 port 54946 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:07.646950 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:07.652018 systemd-logind[1442]: New session 16 of user core. Mar 7 01:09:07.659904 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:09:07.994189 sshd[5957]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:07.998142 systemd[1]: sshd@16-172.239.198.121:22-68.220.241.50:54946.service: Deactivated successfully. Mar 7 01:09:08.000490 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:09:08.002234 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:09:08.004307 systemd-logind[1442]: Removed session 16. Mar 7 01:09:08.031651 systemd[1]: Started sshd@17-172.239.198.121:22-68.220.241.50:54958.service - OpenSSH per-connection server daemon (68.220.241.50:54958). Mar 7 01:09:08.183104 sshd[5970]: Accepted publickey for core from 68.220.241.50 port 54958 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:08.184897 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:08.189933 systemd-logind[1442]: New session 17 of user core. 
Mar 7 01:09:08.193415 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:09:08.383592 sshd[5970]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:08.389713 systemd[1]: sshd@17-172.239.198.121:22-68.220.241.50:54958.service: Deactivated successfully. Mar 7 01:09:08.392741 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:09:08.394131 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:09:08.395394 systemd-logind[1442]: Removed session 17. Mar 7 01:09:13.414632 systemd[1]: Started sshd@18-172.239.198.121:22-68.220.241.50:39606.service - OpenSSH per-connection server daemon (68.220.241.50:39606). Mar 7 01:09:13.485590 systemd[1]: run-containerd-runc-k8s.io-866280a7cb9d44ffa38359c7c3cdeb98d4270a8e07781d43d017bd44ea9ec45d-runc.EhpF0H.mount: Deactivated successfully. Mar 7 01:09:13.565001 sshd[5985]: Accepted publickey for core from 68.220.241.50 port 39606 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:13.567448 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:13.571670 systemd-logind[1442]: New session 18 of user core. Mar 7 01:09:13.577436 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:09:13.756467 sshd[5985]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:13.760959 systemd[1]: sshd@18-172.239.198.121:22-68.220.241.50:39606.service: Deactivated successfully. Mar 7 01:09:13.763944 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:09:13.764615 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:09:13.765446 systemd-logind[1442]: Removed session 18. Mar 7 01:09:18.786357 systemd[1]: Started sshd@19-172.239.198.121:22-68.220.241.50:39622.service - OpenSSH per-connection server daemon (68.220.241.50:39622). 
Mar 7 01:09:18.938698 sshd[6016]: Accepted publickey for core from 68.220.241.50 port 39622 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:09:18.940152 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:18.944113 systemd-logind[1442]: New session 19 of user core. Mar 7 01:09:18.952453 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:09:19.142984 sshd[6016]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:19.148386 systemd[1]: sshd@19-172.239.198.121:22-68.220.241.50:39622.service: Deactivated successfully. Mar 7 01:09:19.149603 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:09:19.150962 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:09:19.151963 systemd-logind[1442]: Removed session 19.