Mar 7 01:21:50.015945 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:21:50.015968 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:21:50.015976 kernel: BIOS-provided physical RAM map:
Mar 7 01:21:50.015983 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 7 01:21:50.015988 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 7 01:21:50.015997 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:21:50.016004 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 7 01:21:50.016010 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 7 01:21:50.016015 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:21:50.016021 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:21:50.016027 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:21:50.016033 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:21:50.016038 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 7 01:21:50.016047 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:21:50.016054 kernel: NX (Execute Disable) protection: active
Mar 7 01:21:50.016060 kernel: APIC: Static calls initialized
Mar 7 01:21:50.016066 kernel: SMBIOS 2.8 present.
Mar 7 01:21:50.016072 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 7 01:21:50.016078 kernel: Hypervisor detected: KVM
Mar 7 01:21:50.016087 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:21:50.016093 kernel: kvm-clock: using sched offset of 5927565045 cycles
Mar 7 01:21:50.016099 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:21:50.016106 kernel: tsc: Detected 1999.999 MHz processor
Mar 7 01:21:50.016112 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:21:50.016119 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:21:50.016125 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 7 01:21:50.016131 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:21:50.016137 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:21:50.016146 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 7 01:21:50.016152 kernel: Using GB pages for direct mapping
Mar 7 01:21:50.016158 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:21:50.016164 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 7 01:21:50.016170 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016176 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016183 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016189 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 7 01:21:50.016195 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016203 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016209 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016216 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:21:50.016226 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 7 01:21:50.016233 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 7 01:21:50.016239 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 7 01:21:50.016248 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 7 01:21:50.016255 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 7 01:21:50.016261 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 7 01:21:50.016267 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 7 01:21:50.016274 kernel: No NUMA configuration found
Mar 7 01:21:50.016280 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 7 01:21:50.016286 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Mar 7 01:21:50.016293 kernel: Zone ranges:
Mar 7 01:21:50.016302 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:21:50.016308 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:21:50.016315 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:21:50.016321 kernel: Movable zone start for each node
Mar 7 01:21:50.016327 kernel: Early memory node ranges
Mar 7 01:21:50.016334 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:21:50.016340 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 7 01:21:50.016347 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:21:50.016353 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 7 01:21:50.016360 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:21:50.016369 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:21:50.016375 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 7 01:21:50.016381 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:21:50.016388 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:21:50.016394 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:21:50.016401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:21:50.016407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:21:50.016414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:21:50.016420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:21:50.016429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:21:50.016435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:21:50.016442 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:21:50.016449 kernel: TSC deadline timer available
Mar 7 01:21:50.016455 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:21:50.016462 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:21:50.016468 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:21:50.016474 kernel: kvm-guest: setup PV sched yield
Mar 7 01:21:50.016481 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:21:50.016490 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:21:50.016497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:21:50.016503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:21:50.016509 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:21:50.016516 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:21:50.016522 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:21:50.016528 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:21:50.016719 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:21:50.016727 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:21:50.016736 kernel: random: crng init done
Mar 7 01:21:50.016742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:21:50.016749 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:21:50.016755 kernel: Fallback order for Node 0: 0
Mar 7 01:21:50.016762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 7 01:21:50.016768 kernel: Policy zone: Normal
Mar 7 01:21:50.016774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:21:50.016781 kernel: software IO TLB: area num 2.
Mar 7 01:21:50.016790 kernel: Memory: 3966208K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227304K reserved, 0K cma-reserved)
Mar 7 01:21:50.016796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:21:50.016802 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:21:50.016809 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:21:50.016816 kernel: Dynamic Preempt: voluntary
Mar 7 01:21:50.016822 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:21:50.016829 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:21:50.016836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:21:50.016843 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:21:50.016852 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:21:50.016859 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:21:50.016865 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:21:50.016872 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:21:50.016878 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:21:50.016884 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:21:50.016891 kernel: Console: colour VGA+ 80x25
Mar 7 01:21:50.016897 kernel: printk: console [tty0] enabled
Mar 7 01:21:50.016903 kernel: printk: console [ttyS0] enabled
Mar 7 01:21:50.016932 kernel: ACPI: Core revision 20230628
Mar 7 01:21:50.016939 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:21:50.016946 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:21:50.016952 kernel: x2apic enabled
Mar 7 01:21:50.016967 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:21:50.016977 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:21:50.016984 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:21:50.016990 kernel: kvm-guest: setup PV IPIs
Mar 7 01:21:50.016997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:21:50.017004 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:21:50.017010 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Mar 7 01:21:50.017017 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:21:50.017026 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:21:50.017033 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:21:50.017040 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:21:50.017046 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:21:50.017053 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:21:50.017063 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 7 01:21:50.017070 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:21:50.017077 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:21:50.017083 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:21:50.017091 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:21:50.017097 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:21:50.017104 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:21:50.017111 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:21:50.017120 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:21:50.017127 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:21:50.017134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:21:50.017141 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:21:50.017147 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:21:50.017154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:21:50.017161 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 7 01:21:50.017168 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 7 01:21:50.017174 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:21:50.017184 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:21:50.017191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:21:50.017197 kernel: landlock: Up and running.
Mar 7 01:21:50.017204 kernel: SELinux: Initializing.
Mar 7 01:21:50.017211 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.017217 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.017224 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:21:50.017231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:21:50.017238 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:21:50.017247 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:21:50.017254 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:21:50.017261 kernel: ... version: 0
Mar 7 01:21:50.017267 kernel: ... bit width: 48
Mar 7 01:21:50.017274 kernel: ... generic registers: 6
Mar 7 01:21:50.017281 kernel: ... value mask: 0000ffffffffffff
Mar 7 01:21:50.017287 kernel: ... max period: 00007fffffffffff
Mar 7 01:21:50.017294 kernel: ... fixed-purpose events: 0
Mar 7 01:21:50.017301 kernel: ... event mask: 000000000000003f
Mar 7 01:21:50.017310 kernel: signal: max sigframe size: 3376
Mar 7 01:21:50.017317 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:21:50.017323 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:21:50.017330 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:21:50.017337 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:21:50.017343 kernel: .... node #0, CPUs: #1
Mar 7 01:21:50.017350 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:21:50.017357 kernel: smpboot: Max logical packages: 1
Mar 7 01:21:50.017363 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 7 01:21:50.017372 kernel: devtmpfs: initialized
Mar 7 01:21:50.017379 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:21:50.017386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:21:50.017393 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:21:50.017399 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:21:50.017406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:21:50.017413 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:21:50.017420 kernel: audit: type=2000 audit(1772846509.129:1): state=initialized audit_enabled=0 res=1
Mar 7 01:21:50.017427 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:21:50.017436 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:21:50.017443 kernel: cpuidle: using governor menu
Mar 7 01:21:50.017636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:21:50.017642 kernel: dca service started, version 1.12.1
Mar 7 01:21:50.017649 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:21:50.017656 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:21:50.017663 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:21:50.017669 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:21:50.017676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:21:50.017686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:21:50.017692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:21:50.017699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:21:50.017706 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:21:50.017712 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:21:50.017719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:21:50.017725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:21:50.017732 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:21:50.017739 kernel: ACPI: Interpreter enabled
Mar 7 01:21:50.017748 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:21:50.017755 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:21:50.017761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:21:50.017768 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:21:50.017775 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:21:50.017782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:21:50.020308 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:21:50.020462 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:21:50.020602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:21:50.020612 kernel: PCI host bridge to bus 0000:00
Mar 7 01:21:50.020751 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:21:50.020868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:21:50.021010 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:21:50.021128 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:21:50.021243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:21:50.021365 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:21:50.021480 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:21:50.021816 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:21:50.023035 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:21:50.023173 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:21:50.023301 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:21:50.023433 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:21:50.023558 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:21:50.023720 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:21:50.023851 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:21:50.023998 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:21:50.024128 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:21:50.024264 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:21:50.024396 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:21:50.024520 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:21:50.024644 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:21:50.024767 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:21:50.024901 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:21:50.027074 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:21:50.027218 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:21:50.027380 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:21:50.027531 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:21:50.027694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:21:50.027826 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:21:50.027836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:21:50.027844 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:21:50.027851 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:21:50.027863 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:21:50.027870 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:21:50.027877 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:21:50.027884 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:21:50.027891 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:21:50.027898 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:21:50.029933 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:21:50.029944 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:21:50.029953 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:21:50.029964 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:21:50.029971 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:21:50.029978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:21:50.029985 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:21:50.029993 kernel: iommu: Default domain type: Translated
Mar 7 01:21:50.030000 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:21:50.030007 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:21:50.030014 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:21:50.030022 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:21:50.030032 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:21:50.030173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:21:50.030301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:21:50.030427 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:21:50.030436 kernel: vgaarb: loaded
Mar 7 01:21:50.030443 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:21:50.030639 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:21:50.030646 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:21:50.030657 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:21:50.030664 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:21:50.030671 kernel: pnp: PnP ACPI init
Mar 7 01:21:50.030809 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:21:50.030819 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:21:50.030827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:21:50.030834 kernel: NET: Registered PF_INET protocol family
Mar 7 01:21:50.030841 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:21:50.030851 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:21:50.030858 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:21:50.030865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:21:50.030872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:21:50.030879 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:21:50.030886 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.030893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.030900 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:21:50.031007 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:21:50.031139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:21:50.031255 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:21:50.031368 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:21:50.031481 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:21:50.031595 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:21:50.031708 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:21:50.031717 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:21:50.031724 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:21:50.031735 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:21:50.031742 kernel: Initialise system trusted keyrings
Mar 7 01:21:50.031749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:21:50.031756 kernel: Key type asymmetric registered
Mar 7 01:21:50.031763 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:21:50.031770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:21:50.031777 kernel: io scheduler mq-deadline registered
Mar 7 01:21:50.031784 kernel: io scheduler kyber registered
Mar 7 01:21:50.031791 kernel: io scheduler bfq registered
Mar 7 01:21:50.031798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:21:50.031808 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:21:50.031815 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:21:50.031822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:21:50.031829 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:21:50.031836 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:21:50.031843 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:21:50.031850 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:21:50.031995 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:21:50.032011 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:21:50.032131 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:21:50.032249 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:21:49 UTC (1772846509)
Mar 7 01:21:50.032368 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:21:50.032377 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:21:50.032384 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:21:50.032391 kernel: Segment Routing with IPv6
Mar 7 01:21:50.032398 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:21:50.032408 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:21:50.032415 kernel: Key type dns_resolver registered
Mar 7 01:21:50.032422 kernel: IPI shorthand broadcast: enabled
Mar 7 01:21:50.032429 kernel: sched_clock: Marking stable (878004776, 322868488)->(1349903175, -149029911)
Mar 7 01:21:50.032436 kernel: registered taskstats version 1
Mar 7 01:21:50.032443 kernel: Loading compiled-in X.509 certificates
Mar 7 01:21:50.032450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:21:50.032456 kernel: Key type .fscrypt registered
Mar 7 01:21:50.032463 kernel: Key type fscrypt-provisioning registered
Mar 7 01:21:50.032473 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:21:50.032480 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:21:50.032486 kernel: ima: No architecture policies found
Mar 7 01:21:50.032493 kernel: clk: Disabling unused clocks
Mar 7 01:21:50.032500 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:21:50.032507 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:21:50.032514 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:21:50.032521 kernel: Run /init as init process
Mar 7 01:21:50.032527 kernel: with arguments:
Mar 7 01:21:50.032537 kernel: /init
Mar 7 01:21:50.032544 kernel: with environment:
Mar 7 01:21:50.032550 kernel: HOME=/
Mar 7 01:21:50.032557 kernel: TERM=linux
Mar 7 01:21:50.032566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:21:50.032575 systemd[1]: Detected virtualization kvm.
Mar 7 01:21:50.032583 systemd[1]: Detected architecture x86-64.
Mar 7 01:21:50.032590 systemd[1]: Running in initrd.
Mar 7 01:21:50.032600 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:21:50.032607 systemd[1]: Hostname set to .
Mar 7 01:21:50.032615 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:21:50.032622 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:21:50.032630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:21:50.032653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:21:50.032667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:21:50.032675 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:21:50.032683 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:21:50.032690 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:21:50.032699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:21:50.032707 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:21:50.032717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:21:50.032725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:21:50.032733 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:21:50.032740 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:21:50.032748 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:21:50.032755 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:21:50.032763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:21:50.032770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:21:50.032778 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:21:50.032789 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:21:50.032796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:21:50.032804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:21:50.032811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:21:50.032819 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:21:50.032826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:21:50.032834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:21:50.032841 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:21:50.032849 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:21:50.032859 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:21:50.032867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:21:50.032874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:50.032882 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:21:50.034462 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:21:50.034495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:21:50.034504 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:21:50.034516 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:21:50.034524 systemd-journald[178]: Journal started
Mar 7 01:21:50.034541 systemd-journald[178]: Runtime Journal (/run/log/journal/f0644fe47f5f4427ae1008cbc67cf674) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:21:50.002429 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:21:50.133571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:21:50.133597 kernel: Bridge firewalling registered
Mar 7 01:21:50.045875 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:21:50.138965 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:21:50.140041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:21:50.142232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:50.143704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:21:50.152062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:50.154236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:21:50.159062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:21:50.175050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:21:50.188893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:21:50.195477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:50.200067 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:21:50.206052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:21:50.209049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:21:50.211191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:21:50.223309 dracut-cmdline[210]: dracut-dracut-053
Mar 7 01:21:50.226392 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:21:50.254008 systemd-resolved[211]: Positive Trust Anchors:
Mar 7 01:21:50.255279 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:21:50.255311 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:21:50.259191 systemd-resolved[211]: Defaulting to hostname 'linux'.
Mar 7 01:21:50.263201 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:21:50.264498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:21:50.316966 kernel: SCSI subsystem initialized
Mar 7 01:21:50.326943 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:21:50.338958 kernel: iscsi: registered transport (tcp)
Mar 7 01:21:50.362058 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:21:50.362131 kernel: QLogic iSCSI HBA Driver
Mar 7 01:21:50.422733 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:21:50.432060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:21:50.468275 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:21:50.468361 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:21:50.469938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:21:50.519952 kernel: raid6: avx2x4 gen() 25958 MB/s
Mar 7 01:21:50.537944 kernel: raid6: avx2x2 gen() 22133 MB/s
Mar 7 01:21:50.556078 kernel: raid6: avx2x1 gen() 19343 MB/s
Mar 7 01:21:50.556108 kernel: raid6: using algorithm avx2x4 gen() 25958 MB/s
Mar 7 01:21:50.576280 kernel: raid6: .... xor() 3158 MB/s, rmw enabled
Mar 7 01:21:50.576307 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:21:50.598943 kernel: xor: automatically using best checksumming function avx
Mar 7 01:21:50.743972 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:21:50.759784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:21:50.767103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:21:50.793468 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Mar 7 01:21:50.798264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:21:50.809199 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:21:50.827402 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 7 01:21:50.869944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:21:50.877063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:21:50.949389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:21:50.959457 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:21:50.974394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:21:50.977857 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:21:50.980562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:21:50.982303 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:21:50.993108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:21:51.005377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:21:51.034939 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:21:51.270505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:21:51.270641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:51.291695 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:21:51.291901 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 01:21:51.291959 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:21:51.291972 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:21:51.271844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:51.276006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:21:51.276122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:51.290336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:51.300207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:51.309932 kernel: libata version 3.00 loaded.
Mar 7 01:21:51.330965 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:21:51.331236 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:21:51.332955 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:21:51.333154 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:21:51.335940 kernel: scsi host1: ahci
Mar 7 01:21:51.336137 kernel: scsi host2: ahci
Mar 7 01:21:51.338119 kernel: scsi host3: ahci
Mar 7 01:21:51.338299 kernel: scsi host4: ahci
Mar 7 01:21:51.342237 kernel: scsi host5: ahci
Mar 7 01:21:51.342446 kernel: scsi host6: ahci
Mar 7 01:21:51.344120 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Mar 7 01:21:51.344159 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Mar 7 01:21:51.344171 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Mar 7 01:21:51.344182 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Mar 7 01:21:51.344193 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Mar 7 01:21:51.344203 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Mar 7 01:21:51.462417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:51.470055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:51.491035 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:51.655935 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.656065 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.659279 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.659923 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.664937 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.664960 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.678376 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:21:51.681933 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:21:51.682127 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:21:51.706266 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:21:51.706481 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:21:51.715874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:21:51.715966 kernel: GPT:9289727 != 167739391
Mar 7 01:21:51.715982 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:21:51.719703 kernel: GPT:9289727 != 167739391
Mar 7 01:21:51.719725 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:21:51.722225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:51.726933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:21:51.761736 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (460)
Mar 7 01:21:51.765384 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:21:51.766429 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458)
Mar 7 01:21:51.775630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:21:51.785294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:21:51.790714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:21:51.791576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:21:51.805040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:21:51.811796 disk-uuid[574]: Primary Header is updated.
Mar 7 01:21:51.811796 disk-uuid[574]: Secondary Entries is updated.
Mar 7 01:21:51.811796 disk-uuid[574]: Secondary Header is updated.
Mar 7 01:21:51.818946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:51.826944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:52.830008 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:52.831465 disk-uuid[575]: The operation has completed successfully.
Mar 7 01:21:52.881884 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:21:52.882063 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:21:52.901073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:21:52.919848 sh[589]: Success
Mar 7 01:21:52.934976 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:21:52.992846 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:21:52.994997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:21:53.001917 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:21:53.023229 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:21:53.023275 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:53.026482 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:21:53.032566 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:21:53.032588 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:21:53.042948 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:21:53.045173 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:21:53.046538 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:21:53.056067 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:21:53.060129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:21:53.076498 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:53.076575 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:53.079220 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:53.088755 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:53.088822 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:53.103171 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:21:53.107370 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:53.115938 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:21:53.124116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:21:53.207368 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:21:53.211288 ignition[699]: Ignition 2.19.0
Mar 7 01:21:53.211301 ignition[699]: Stage: fetch-offline
Mar 7 01:21:53.215835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:21:53.211350 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:53.218531 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:21:53.211365 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:53.211471 ignition[699]: parsed url from cmdline: ""
Mar 7 01:21:53.211476 ignition[699]: no config URL provided
Mar 7 01:21:53.211481 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.211491 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.211497 ignition[699]: failed to fetch config: resource requires networking
Mar 7 01:21:53.214139 ignition[699]: Ignition finished successfully
Mar 7 01:21:53.245426 systemd-networkd[775]: lo: Link UP
Mar 7 01:21:53.245435 systemd-networkd[775]: lo: Gained carrier
Mar 7 01:21:53.247472 systemd-networkd[775]: Enumeration completed
Mar 7 01:21:53.247939 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:21:53.247943 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:21:53.249023 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:21:53.250652 systemd-networkd[775]: eth0: Link UP
Mar 7 01:21:53.250657 systemd-networkd[775]: eth0: Gained carrier
Mar 7 01:21:53.250665 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:21:53.251062 systemd[1]: Reached target network.target - Network.
Mar 7 01:21:53.259152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:21:53.274693 ignition[779]: Ignition 2.19.0
Mar 7 01:21:53.275688 ignition[779]: Stage: fetch
Mar 7 01:21:53.275888 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:53.275902 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:53.276009 ignition[779]: parsed url from cmdline: ""
Mar 7 01:21:53.276015 ignition[779]: no config URL provided
Mar 7 01:21:53.276021 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.276031 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.276049 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:21:53.276219 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:53.476365 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:21:53.476612 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:53.877414 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:21:53.877561 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:54.035176 systemd-networkd[775]: eth0: DHCPv4 address 172.232.28.122/24, gateway 172.232.28.1 acquired from 23.194.118.58
Mar 7 01:21:54.677652 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:21:54.774749 ignition[779]: PUT result: OK
Mar 7 01:21:54.774811 ignition[779]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:21:54.890158 ignition[779]: GET result: OK
Mar 7 01:21:54.890435 ignition[779]: parsing config with SHA512: 5234111699c6ace73e9ad48dbe294975e8912067312861ddea1160d1e69c1d4a441f12cc4a2f47e903f7c381894e48be1668bba2f2b45c998b0686d48e66ef0a
Mar 7 01:21:54.896042 unknown[779]: fetched base config from "system"
Mar 7 01:21:54.896066 unknown[779]: fetched base config from "system"
Mar 7 01:21:54.896565 ignition[779]: fetch: fetch complete
Mar 7 01:21:54.896077 unknown[779]: fetched user config from "akamai"
Mar 7 01:21:54.896574 ignition[779]: fetch: fetch passed
Mar 7 01:21:54.896823 ignition[779]: Ignition finished successfully
Mar 7 01:21:54.901737 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:21:54.908078 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:21:54.923737 ignition[786]: Ignition 2.19.0
Mar 7 01:21:54.923752 ignition[786]: Stage: kargs
Mar 7 01:21:54.925452 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:54.925466 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:54.926352 ignition[786]: kargs: kargs passed
Mar 7 01:21:54.928345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:21:54.926396 ignition[786]: Ignition finished successfully
Mar 7 01:21:54.937122 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:21:54.942045 systemd-networkd[775]: eth0: Gained IPv6LL
Mar 7 01:21:54.961945 ignition[792]: Ignition 2.19.0
Mar 7 01:21:54.961972 ignition[792]: Stage: disks
Mar 7 01:21:54.962223 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:54.962238 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:54.967663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:21:54.963476 ignition[792]: disks: disks passed
Mar 7 01:21:54.963527 ignition[792]: Ignition finished successfully
Mar 7 01:21:54.990399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:21:54.992089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:21:54.994013 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:21:54.995806 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:21:54.997785 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:21:55.006072 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:21:55.025526 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:21:55.029445 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:21:55.036028 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:21:55.128971 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:21:55.129479 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:21:55.130973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:21:55.140069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:21:55.144183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:21:55.146643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:21:55.147424 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:21:55.147496 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:21:55.152415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:21:55.157956 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (808)
Mar 7 01:21:55.164419 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:55.164448 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:55.167939 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:55.170092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:21:55.177936 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:55.177974 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:55.180690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:21:55.218619 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:21:55.224534 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:21:55.228551 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:21:55.234220 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:21:55.317943 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:21:55.325984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:21:55.331037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:21:55.336834 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:21:55.341941 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:55.358273 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:21:55.366418 ignition[920]: INFO : Ignition 2.19.0
Mar 7 01:21:55.366418 ignition[920]: INFO : Stage: mount
Mar 7 01:21:55.369103 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:55.369103 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:55.369103 ignition[920]: INFO : mount: mount passed
Mar 7 01:21:55.369103 ignition[920]: INFO : Ignition finished successfully
Mar 7 01:21:55.369448 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:21:55.380053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:21:56.135030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:21:56.148932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Mar 7 01:21:56.148971 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:56.152068 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:56.156795 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:56.162049 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:56.162073 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:56.166575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:21:56.186460 ignition[950]: INFO : Ignition 2.19.0
Mar 7 01:21:56.186460 ignition[950]: INFO : Stage: files
Mar 7 01:21:56.188670 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:56.188670 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:56.188670 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:21:56.188670 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:21:56.188670 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:21:56.194107 unknown[950]: wrote ssh authorized keys file for user: core
Mar 7 01:21:56.198982 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:21:56.198982 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:21:56.498725 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:21:56.586072 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.603228 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:21:57.018060 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:21:57.579796 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:57.579796 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: files passed
Mar 7 01:21:57.582534 ignition[950]: INFO : Ignition finished successfully
Mar 7 01:21:57.584710 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:21:57.616140 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:21:57.621046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:21:57.622497 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:21:57.622636 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:21:57.638027 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.639587 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.641956 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.642785 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:21:57.645377 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:21:57.651128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:21:57.679570 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:21:57.679882 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:21:57.681426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:21:57.682371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:21:57.683445 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:21:57.690096 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:21:57.707392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:21:57.715057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:21:57.726162 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:21:57.728035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:21:57.728967 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:21:57.730605 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:21:57.730715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:21:57.732636 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:21:57.733749 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:21:57.735433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:21:57.736981 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:21:57.738617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:21:57.740348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:21:57.742050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:21:57.743857 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:21:57.745551 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:21:57.747243 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:21:57.748854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:21:57.748979 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:21:57.750876 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 7 01:21:57.752017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:21:57.753524 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:21:57.755000 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:21:57.756257 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:21:57.756355 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:21:57.758473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:21:57.758581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:21:57.759630 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:21:57.759728 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:21:57.771047 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:21:57.775037 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:21:57.776777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:21:57.777855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:21:57.780101 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:21:57.780237 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:21:57.786167 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:21:57.787270 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 01:21:57.798209 ignition[1003]: INFO : Ignition 2.19.0 Mar 7 01:21:57.799218 ignition[1003]: INFO : Stage: umount Mar 7 01:21:57.800305 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:21:57.802373 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:21:57.802373 ignition[1003]: INFO : umount: umount passed Mar 7 01:21:57.802373 ignition[1003]: INFO : Ignition finished successfully Mar 7 01:21:57.808299 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:21:57.809318 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:21:57.812786 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:21:57.813293 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:21:57.813350 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:21:57.814369 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:21:57.814424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:21:57.816573 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:21:57.816625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:21:57.818343 systemd[1]: Stopped target network.target - Network. Mar 7 01:21:57.819780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:21:57.819838 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:21:57.821523 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:21:57.822964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:21:57.848999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:21:57.850148 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:21:57.851827 systemd[1]: Stopped target sockets.target - Socket Units. 
Mar 7 01:21:57.853403 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:21:57.853663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:21:57.855486 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:21:57.855535 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:21:57.857247 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:21:57.857300 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:21:57.859157 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:21:57.859208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:21:57.861242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:21:57.862628 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:21:57.864461 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:21:57.864576 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:21:57.866673 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:21:57.866754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:21:57.867026 systemd-networkd[775]: eth0: DHCPv6 lease lost Mar 7 01:21:57.869421 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:21:57.869543 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:21:57.874049 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:21:57.874179 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:21:57.878404 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:21:57.878469 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:21:57.886985 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:21:57.890141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 7 01:21:57.890251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:21:57.892139 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:21:57.892194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:21:57.893842 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:21:57.893896 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:21:57.895384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:21:57.895437 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:21:57.897215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:21:57.913101 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:21:57.913221 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:21:57.920813 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:21:57.921057 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:21:57.922871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:21:57.922966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:21:57.924265 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:21:57.924306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:21:57.926046 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:21:57.926103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:21:57.928498 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:21:57.928558 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:21:57.930156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 7 01:21:57.930212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:21:57.937060 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:21:57.937849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:21:57.937922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:21:57.938703 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:21:57.938754 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:21:57.942998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:21:57.943051 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:21:57.944226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:21:57.944277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:21:57.946371 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:21:57.946477 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:21:57.947935 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:21:57.958048 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:21:57.964555 systemd[1]: Switching root. 
Mar 7 01:21:57.995104 systemd-journald[178]: Journal stopped Mar 7 01:21:50.016216 
kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:21:50.016226 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Mar 7 01:21:50.016233 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Mar 7 01:21:50.016239 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 7 01:21:50.016248 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Mar 7 01:21:50.016255 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Mar 7 01:21:50.016261 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Mar 7 01:21:50.016267 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Mar 7 01:21:50.016274 kernel: No NUMA configuration found Mar 7 01:21:50.016280 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Mar 7 01:21:50.016286 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff] Mar 7 01:21:50.016293 kernel: Zone ranges: Mar 7 01:21:50.016302 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:21:50.016308 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:21:50.016315 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:21:50.016321 kernel: Movable zone start for each node Mar 7 01:21:50.016327 kernel: Early memory node ranges Mar 7 01:21:50.016334 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 01:21:50.016340 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Mar 7 01:21:50.016347 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:21:50.016353 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Mar 7 01:21:50.016360 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:21:50.016369 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 01:21:50.016375 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 7 01:21:50.016381 
kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:21:50.016388 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:21:50.016394 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:21:50.016401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:21:50.016407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:21:50.016414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:21:50.016420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:21:50.016429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:21:50.016435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:21:50.016442 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:21:50.016449 kernel: TSC deadline timer available Mar 7 01:21:50.016455 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:21:50.016462 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:21:50.016468 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:21:50.016474 kernel: kvm-guest: setup PV sched yield Mar 7 01:21:50.016481 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 01:21:50.016490 kernel: Booting paravirtualized kernel on KVM Mar 7 01:21:50.016497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:21:50.016503 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:21:50.016509 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:21:50.016516 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:21:50.016522 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:21:50.016528 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:21:50.016719 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:21:50.016727 kernel: Kernel command 
line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:21:50.016736 kernel: random: crng init done Mar 7 01:21:50.016742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:21:50.016749 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:21:50.016755 kernel: Fallback order for Node 0: 0 Mar 7 01:21:50.016762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Mar 7 01:21:50.016768 kernel: Policy zone: Normal Mar 7 01:21:50.016774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:21:50.016781 kernel: software IO TLB: area num 2. Mar 7 01:21:50.016790 kernel: Memory: 3966208K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227304K reserved, 0K cma-reserved) Mar 7 01:21:50.016796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:21:50.016802 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:21:50.016809 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:21:50.016816 kernel: Dynamic Preempt: voluntary Mar 7 01:21:50.016822 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:21:50.016829 kernel: rcu: RCU event tracing is enabled. Mar 7 01:21:50.016836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:21:50.016843 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:21:50.016852 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:21:50.016859 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:21:50.016865 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 01:21:50.016872 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:21:50.016878 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:21:50.016884 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:21:50.016891 kernel: Console: colour VGA+ 80x25 Mar 7 01:21:50.016897 kernel: printk: console [tty0] enabled Mar 7 01:21:50.016903 kernel: printk: console [ttyS0] enabled Mar 7 01:21:50.016932 kernel: ACPI: Core revision 20230628 Mar 7 01:21:50.016939 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:21:50.016946 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:21:50.016952 kernel: x2apic enabled Mar 7 01:21:50.016967 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:21:50.016977 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:21:50.016984 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:21:50.016990 kernel: kvm-guest: setup PV IPIs Mar 7 01:21:50.016997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:21:50.017004 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:21:50.017010 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) Mar 7 01:21:50.017017 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:21:50.017026 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:21:50.017033 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:21:50.017040 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:21:50.017046 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:21:50.017053 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:21:50.017063 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 7 01:21:50.017070 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 7 01:21:50.017077 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 7 01:21:50.017083 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:21:50.017091 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:21:50.017097 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:21:50.017104 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:21:50.017111 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:21:50.017120 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:21:50.017127 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:21:50.017134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:21:50.017141 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:21:50.017147 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 7 01:21:50.017154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:21:50.017161 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Mar 7 01:21:50.017168 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Mar 7 01:21:50.017174 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:21:50.017184 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:21:50.017191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:21:50.017197 kernel: landlock: Up and running. Mar 7 01:21:50.017204 kernel: SELinux: Initializing. Mar 7 01:21:50.017211 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:21:50.017217 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:21:50.017224 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:21:50.017231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:21:50.017238 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 7 01:21:50.017247 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:21:50.017254 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 7 01:21:50.017261 kernel: ... version: 0 Mar 7 01:21:50.017267 kernel: ... bit width: 48 Mar 7 01:21:50.017274 kernel: ... generic registers: 6 Mar 7 01:21:50.017281 kernel: ... value mask: 0000ffffffffffff Mar 7 01:21:50.017287 kernel: ... max period: 00007fffffffffff Mar 7 01:21:50.017294 kernel: ... fixed-purpose events: 0 Mar 7 01:21:50.017301 kernel: ... event mask: 000000000000003f Mar 7 01:21:50.017310 kernel: signal: max sigframe size: 3376 Mar 7 01:21:50.017317 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:21:50.017323 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:21:50.017330 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:21:50.017337 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:21:50.017343 kernel: .... node #0, CPUs: #1 Mar 7 01:21:50.017350 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:21:50.017357 kernel: smpboot: Max logical packages: 1 Mar 7 01:21:50.017363 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Mar 7 01:21:50.017372 kernel: devtmpfs: initialized Mar 7 01:21:50.017379 kernel: x86/mm: Memory block size: 128MB Mar 7 01:21:50.017386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:21:50.017393 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:21:50.017399 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:21:50.017406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:21:50.017413 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:21:50.017420 kernel: audit: type=2000 audit(1772846509.129:1): state=initialized audit_enabled=0 res=1 Mar 7 01:21:50.017427 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:21:50.017436 kernel: 
thermal_sys: Registered thermal governor 'user_space' Mar 7 01:21:50.017443 kernel: cpuidle: using governor menu Mar 7 01:21:50.017636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:21:50.017642 kernel: dca service started, version 1.12.1 Mar 7 01:21:50.017649 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:21:50.017656 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:21:50.017663 kernel: PCI: Using configuration type 1 for base access Mar 7 01:21:50.017669 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:21:50.017676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:21:50.017686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:21:50.017692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:21:50.017699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:21:50.017706 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:21:50.017712 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:21:50.017719 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:21:50.017725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:21:50.017732 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:21:50.017739 kernel: ACPI: Interpreter enabled Mar 7 01:21:50.017748 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:21:50.017755 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:21:50.017761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:21:50.017768 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:21:50.017775 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:21:50.017782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:21:50.020308 kernel: acpi 
PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:21:50.020462 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:21:50.020602 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:21:50.020612 kernel: PCI host bridge to bus 0000:00
Mar 7 01:21:50.020751 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:21:50.020868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:21:50.021010 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:21:50.021128 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:21:50.021243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:21:50.021365 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:21:50.021480 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:21:50.021816 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:21:50.023035 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:21:50.023173 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:21:50.023301 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:21:50.023433 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:21:50.023558 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:21:50.023720 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:21:50.023851 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:21:50.023998 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:21:50.024128 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:21:50.024264 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:21:50.024396 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:21:50.024520 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:21:50.024644 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:21:50.024767 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:21:50.024901 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:21:50.027074 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:21:50.027218 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:21:50.027380 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:21:50.027531 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:21:50.027694 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:21:50.027826 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:21:50.027836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:21:50.027844 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:21:50.027851 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:21:50.027863 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:21:50.027870 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:21:50.027877 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:21:50.027884 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:21:50.027891 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:21:50.027898 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:21:50.029933 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:21:50.029944 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:21:50.029953 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:21:50.029964 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:21:50.029971 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:21:50.029978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:21:50.029985 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:21:50.029993 kernel: iommu: Default domain type: Translated
Mar 7 01:21:50.030000 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:21:50.030007 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:21:50.030014 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:21:50.030022 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:21:50.030032 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:21:50.030173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:21:50.030301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:21:50.030427 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:21:50.030436 kernel: vgaarb: loaded
Mar 7 01:21:50.030443 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:21:50.030639 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:21:50.030646 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:21:50.030657 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:21:50.030664 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:21:50.030671 kernel: pnp: PnP ACPI init
Mar 7 01:21:50.030809 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:21:50.030819 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:21:50.030827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:21:50.030834 kernel: NET: Registered PF_INET protocol family
Mar 7 01:21:50.030841 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:21:50.030851 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:21:50.030858 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:21:50.030865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:21:50.030872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:21:50.030879 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:21:50.030886 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.030893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:21:50.030900 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:21:50.031007 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:21:50.031139 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:21:50.031255 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:21:50.031368 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:21:50.031481 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:21:50.031595 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:21:50.031708 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:21:50.031717 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:21:50.031724 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:21:50.031735 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:21:50.031742 kernel: Initialise system trusted keyrings
Mar 7 01:21:50.031749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:21:50.031756 kernel: Key type asymmetric registered
Mar 7 01:21:50.031763 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:21:50.031770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:21:50.031777 kernel: io scheduler mq-deadline registered
Mar 7 01:21:50.031784 kernel: io scheduler kyber registered
Mar 7 01:21:50.031791 kernel: io scheduler bfq registered
Mar 7 01:21:50.031798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:21:50.031808 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:21:50.031815 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:21:50.031822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:21:50.031829 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:21:50.031836 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:21:50.031843 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:21:50.031850 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:21:50.031995 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:21:50.032011 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:21:50.032131 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:21:50.032249 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:21:49 UTC (1772846509)
Mar 7 01:21:50.032368 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:21:50.032377 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:21:50.032384 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:21:50.032391 kernel: Segment Routing with IPv6
Mar 7 01:21:50.032398 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:21:50.032408 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:21:50.032415 kernel: Key type dns_resolver registered
Mar 7 01:21:50.032422 kernel: IPI shorthand broadcast: enabled
Mar 7 01:21:50.032429 kernel: sched_clock: Marking stable (878004776, 322868488)->(1349903175, -149029911)
Mar 7 01:21:50.032436 kernel: registered taskstats version 1
Mar 7 01:21:50.032443 kernel: Loading compiled-in X.509 certificates
Mar 7 01:21:50.032450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:21:50.032456 kernel: Key type .fscrypt registered
Mar 7 01:21:50.032463 kernel: Key type fscrypt-provisioning registered
Mar 7 01:21:50.032473 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:21:50.032480 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:21:50.032486 kernel: ima: No architecture policies found
Mar 7 01:21:50.032493 kernel: clk: Disabling unused clocks
Mar 7 01:21:50.032500 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:21:50.032507 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:21:50.032514 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:21:50.032521 kernel: Run /init as init process
Mar 7 01:21:50.032527 kernel: with arguments:
Mar 7 01:21:50.032537 kernel: /init
Mar 7 01:21:50.032544 kernel: with environment:
Mar 7 01:21:50.032550 kernel: HOME=/
Mar 7 01:21:50.032557 kernel: TERM=linux
Mar 7 01:21:50.032566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:21:50.032575 systemd[1]: Detected virtualization kvm.
Mar 7 01:21:50.032583 systemd[1]: Detected architecture x86-64.
Mar 7 01:21:50.032590 systemd[1]: Running in initrd.
Mar 7 01:21:50.032600 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:21:50.032607 systemd[1]: Hostname set to .
Mar 7 01:21:50.032615 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:21:50.032622 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:21:50.032630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:21:50.032653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:21:50.032667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:21:50.032675 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:21:50.032683 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:21:50.032690 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:21:50.032699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:21:50.032707 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:21:50.032717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:21:50.032725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:21:50.032733 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:21:50.032740 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:21:50.032748 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:21:50.032755 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:21:50.032763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:21:50.032770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:21:50.032778 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:21:50.032789 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:21:50.032796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:21:50.032804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:21:50.032811 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:21:50.032819 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:21:50.032826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:21:50.032834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:21:50.032841 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:21:50.032849 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:21:50.032859 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:21:50.032867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:21:50.032874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:50.032882 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:21:50.034462 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:21:50.034495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:21:50.034504 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:21:50.034516 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:21:50.034524 systemd-journald[178]: Journal started
Mar 7 01:21:50.034541 systemd-journald[178]: Runtime Journal (/run/log/journal/f0644fe47f5f4427ae1008cbc67cf674) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:21:50.002429 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:21:50.133571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:21:50.133597 kernel: Bridge firewalling registered
Mar 7 01:21:50.045875 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:21:50.138965 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:21:50.140041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:21:50.142232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:50.143704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:21:50.152062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:50.154236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:21:50.159062 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:21:50.175050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:21:50.188893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:21:50.195477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:50.200067 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:21:50.206052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:21:50.209049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:21:50.211191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:21:50.223309 dracut-cmdline[210]: dracut-dracut-053
Mar 7 01:21:50.226392 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:21:50.254008 systemd-resolved[211]: Positive Trust Anchors:
Mar 7 01:21:50.255279 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:21:50.255311 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:21:50.259191 systemd-resolved[211]: Defaulting to hostname 'linux'.
Mar 7 01:21:50.263201 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:21:50.264498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:21:50.316966 kernel: SCSI subsystem initialized
Mar 7 01:21:50.326943 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:21:50.338958 kernel: iscsi: registered transport (tcp)
Mar 7 01:21:50.362058 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:21:50.362131 kernel: QLogic iSCSI HBA Driver
Mar 7 01:21:50.422733 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:21:50.432060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:21:50.468275 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:21:50.468361 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:21:50.469938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:21:50.519952 kernel: raid6: avx2x4 gen() 25958 MB/s
Mar 7 01:21:50.537944 kernel: raid6: avx2x2 gen() 22133 MB/s
Mar 7 01:21:50.556078 kernel: raid6: avx2x1 gen() 19343 MB/s
Mar 7 01:21:50.556108 kernel: raid6: using algorithm avx2x4 gen() 25958 MB/s
Mar 7 01:21:50.576280 kernel: raid6: .... xor() 3158 MB/s, rmw enabled
Mar 7 01:21:50.576307 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:21:50.598943 kernel: xor: automatically using best checksumming function avx
Mar 7 01:21:50.743972 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:21:50.759784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:21:50.767103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:21:50.793468 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Mar 7 01:21:50.798264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:21:50.809199 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:21:50.827402 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 7 01:21:50.869944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:21:50.877063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:21:50.949389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:21:50.959457 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:21:50.974394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:21:50.977857 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:21:50.980562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:21:50.982303 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:21:50.993108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:21:51.005377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:21:51.034939 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:21:51.270505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:21:51.270641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:51.291695 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:21:51.291901 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 01:21:51.291959 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:21:51.291972 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:21:51.271844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:51.276006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:21:51.276122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:51.290336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:51.300207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:21:51.309932 kernel: libata version 3.00 loaded.
Mar 7 01:21:51.330965 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:21:51.331236 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:21:51.332955 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:21:51.333154 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:21:51.335940 kernel: scsi host1: ahci
Mar 7 01:21:51.336137 kernel: scsi host2: ahci
Mar 7 01:21:51.338119 kernel: scsi host3: ahci
Mar 7 01:21:51.338299 kernel: scsi host4: ahci
Mar 7 01:21:51.342237 kernel: scsi host5: ahci
Mar 7 01:21:51.342446 kernel: scsi host6: ahci
Mar 7 01:21:51.344120 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Mar 7 01:21:51.344159 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Mar 7 01:21:51.344171 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Mar 7 01:21:51.344182 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Mar 7 01:21:51.344193 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Mar 7 01:21:51.344203 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Mar 7 01:21:51.462417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:51.470055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:21:51.491035 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:51.655935 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.656065 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.659279 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.659923 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.664937 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.664960 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:21:51.678376 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:21:51.681933 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:21:51.682127 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:21:51.706266 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:21:51.706481 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:21:51.715874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:21:51.715966 kernel: GPT:9289727 != 167739391
Mar 7 01:21:51.715982 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:21:51.719703 kernel: GPT:9289727 != 167739391
Mar 7 01:21:51.719725 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:21:51.722225 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:51.726933 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:21:51.761736 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (460)
Mar 7 01:21:51.765384 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:21:51.766429 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (458)
Mar 7 01:21:51.775630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:21:51.785294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:21:51.790714 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:21:51.791576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:21:51.805040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:21:51.811796 disk-uuid[574]: Primary Header is updated.
Mar 7 01:21:51.811796 disk-uuid[574]: Secondary Entries is updated.
Mar 7 01:21:51.811796 disk-uuid[574]: Secondary Header is updated.
Mar 7 01:21:51.818946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:51.826944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:52.830008 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:21:52.831465 disk-uuid[575]: The operation has completed successfully.
Mar 7 01:21:52.881884 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:21:52.882063 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:21:52.901073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:21:52.919848 sh[589]: Success
Mar 7 01:21:52.934976 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:21:52.992846 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:21:52.994997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:21:53.001917 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:21:53.023229 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:21:53.023275 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:53.026482 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:21:53.032566 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:21:53.032588 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:21:53.042948 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:21:53.045173 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:21:53.046538 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:21:53.056067 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:21:53.060129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:21:53.076498 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:53.076575 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:53.079220 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:53.088755 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:53.088822 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:53.103171 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:21:53.107370 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:53.115938 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:21:53.124116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:21:53.207368 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:21:53.211288 ignition[699]: Ignition 2.19.0
Mar 7 01:21:53.211301 ignition[699]: Stage: fetch-offline
Mar 7 01:21:53.215835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:21:53.211350 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:53.218531 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:21:53.211365 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:53.211471 ignition[699]: parsed url from cmdline: ""
Mar 7 01:21:53.211476 ignition[699]: no config URL provided
Mar 7 01:21:53.211481 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.211491 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.211497 ignition[699]: failed to fetch config: resource requires networking
Mar 7 01:21:53.214139 ignition[699]: Ignition finished successfully
Mar 7 01:21:53.245426 systemd-networkd[775]: lo: Link UP
Mar 7 01:21:53.245435 systemd-networkd[775]: lo: Gained carrier
Mar 7 01:21:53.247472 systemd-networkd[775]: Enumeration completed
Mar 7 01:21:53.247939 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:21:53.247943 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:21:53.249023 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:21:53.250652 systemd-networkd[775]: eth0: Link UP
Mar 7 01:21:53.250657 systemd-networkd[775]: eth0: Gained carrier
Mar 7 01:21:53.250665 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:21:53.251062 systemd[1]: Reached target network.target - Network.
Mar 7 01:21:53.259152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:21:53.274693 ignition[779]: Ignition 2.19.0
Mar 7 01:21:53.275688 ignition[779]: Stage: fetch
Mar 7 01:21:53.275888 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:53.275902 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:53.276009 ignition[779]: parsed url from cmdline: ""
Mar 7 01:21:53.276015 ignition[779]: no config URL provided
Mar 7 01:21:53.276021 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.276031 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:21:53.276049 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:21:53.276219 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:53.476365 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:21:53.476612 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:53.877414 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:21:53.877561 ignition[779]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:21:54.035176 systemd-networkd[775]: eth0: DHCPv4 address 172.232.28.122/24, gateway 172.232.28.1 acquired from 23.194.118.58
Mar 7 01:21:54.677652 ignition[779]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:21:54.774749 ignition[779]: PUT result: OK
Mar 7 01:21:54.774811 ignition[779]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:21:54.890158 ignition[779]: GET result: OK
Mar 7 01:21:54.890435 ignition[779]: parsing config with SHA512: 5234111699c6ace73e9ad48dbe294975e8912067312861ddea1160d1e69c1d4a441f12cc4a2f47e903f7c381894e48be1668bba2f2b45c998b0686d48e66ef0a
Mar 7 01:21:54.896042 unknown[779]: fetched base config from "system"
Mar 7 01:21:54.896066 unknown[779]: fetched base config from "system"
Mar 7 01:21:54.896565 ignition[779]: fetch: fetch complete
Mar 7 01:21:54.896077 unknown[779]: fetched user config from "akamai"
Mar 7 01:21:54.896574 ignition[779]: fetch: fetch passed
Mar 7 01:21:54.896823 ignition[779]: Ignition finished successfully
Mar 7 01:21:54.901737 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:21:54.908078 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:21:54.923737 ignition[786]: Ignition 2.19.0
Mar 7 01:21:54.923752 ignition[786]: Stage: kargs
Mar 7 01:21:54.925452 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:54.925466 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:54.926352 ignition[786]: kargs: kargs passed
Mar 7 01:21:54.928345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:21:54.926396 ignition[786]: Ignition finished successfully
Mar 7 01:21:54.937122 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:21:54.942045 systemd-networkd[775]: eth0: Gained IPv6LL
Mar 7 01:21:54.961945 ignition[792]: Ignition 2.19.0
Mar 7 01:21:54.961972 ignition[792]: Stage: disks
Mar 7 01:21:54.962223 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:54.962238 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:54.967663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:21:54.963476 ignition[792]: disks: disks passed
Mar 7 01:21:54.963527 ignition[792]: Ignition finished successfully
Mar 7 01:21:54.990399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:21:54.992089 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:21:54.994013 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:21:54.995806 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:21:54.997785 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:21:55.006072 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:21:55.025526 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:21:55.029445 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:21:55.036028 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:21:55.128971 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:21:55.129479 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:21:55.130973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:21:55.140069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:21:55.144183 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:21:55.146643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:21:55.147424 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:21:55.147496 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:21:55.152415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:21:55.157956 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (808)
Mar 7 01:21:55.164419 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:55.164448 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:55.167939 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:55.170092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:21:55.177936 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:55.177974 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:55.180690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:21:55.218619 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:21:55.224534 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:21:55.228551 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:21:55.234220 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:21:55.317943 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:21:55.325984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:21:55.331037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:21:55.336834 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:21:55.341941 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:55.358273 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:21:55.366418 ignition[920]: INFO : Ignition 2.19.0
Mar 7 01:21:55.366418 ignition[920]: INFO : Stage: mount
Mar 7 01:21:55.369103 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:55.369103 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:55.369103 ignition[920]: INFO : mount: mount passed
Mar 7 01:21:55.369103 ignition[920]: INFO : Ignition finished successfully
Mar 7 01:21:55.369448 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:21:55.380053 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:21:56.135030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:21:56.148932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Mar 7 01:21:56.148971 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:21:56.152068 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:21:56.156795 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:21:56.162049 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:21:56.162073 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:21:56.166575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:21:56.186460 ignition[950]: INFO : Ignition 2.19.0
Mar 7 01:21:56.186460 ignition[950]: INFO : Stage: files
Mar 7 01:21:56.188670 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:56.188670 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:56.188670 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:21:56.188670 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:21:56.188670 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:21:56.194379 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:21:56.194107 unknown[950]: wrote ssh authorized keys file for user: core
Mar 7 01:21:56.198982 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:21:56.198982 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:21:56.498725 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:21:56.586072 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.588046 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:56.603228 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:21:57.018060 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:21:57.579796 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:21:57.579796 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:21:57.582534 ignition[950]: INFO : files: files passed
Mar 7 01:21:57.582534 ignition[950]: INFO : Ignition finished successfully
Mar 7 01:21:57.584710 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:21:57.616140 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:21:57.621046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:21:57.622497 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:21:57.622636 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:21:57.638027 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.639587 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.641956 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:21:57.642785 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:21:57.645377 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:21:57.651128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:21:57.679570 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:21:57.679882 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:21:57.681426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:21:57.682371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:21:57.683445 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:21:57.690096 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:21:57.707392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:21:57.715057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:21:57.726162 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:21:57.728035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:21:57.728967 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:21:57.730605 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:21:57.730715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:21:57.732636 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:21:57.733749 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:21:57.735433 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:21:57.736981 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:21:57.738617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:21:57.740348 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:21:57.742050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:21:57.743857 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:21:57.745551 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:21:57.747243 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:21:57.748854 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:21:57.748979 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:21:57.750876 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:21:57.752017 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:21:57.753524 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:21:57.755000 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:21:57.756257 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:21:57.756355 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:21:57.758473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:21:57.758581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:21:57.759630 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:21:57.759728 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:21:57.771047 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:21:57.775037 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:21:57.776777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:21:57.777855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:21:57.780101 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:21:57.780237 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:21:57.786167 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:21:57.787270 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:21:57.798209 ignition[1003]: INFO : Ignition 2.19.0
Mar 7 01:21:57.799218 ignition[1003]: INFO : Stage: umount
Mar 7 01:21:57.800305 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:21:57.802373 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:21:57.802373 ignition[1003]: INFO : umount: umount passed
Mar 7 01:21:57.802373 ignition[1003]: INFO : Ignition finished successfully
Mar 7 01:21:57.808299 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:21:57.809318 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:21:57.812786 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:21:57.813293 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:21:57.813350 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:21:57.814369 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:21:57.814424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:21:57.816573 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:21:57.816625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:21:57.818343 systemd[1]: Stopped target network.target - Network.
Mar 7 01:21:57.819780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:21:57.819838 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:21:57.821523 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:21:57.822964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:21:57.848999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:21:57.850148 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:21:57.851827 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:21:57.853403 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:21:57.853663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:21:57.855486 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:21:57.855535 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:21:57.857247 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:21:57.857300 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:21:57.859157 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:21:57.859208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:21:57.861242 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:21:57.862628 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:21:57.864461 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:21:57.864576 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:21:57.866673 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:21:57.866754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:21:57.867026 systemd-networkd[775]: eth0: DHCPv6 lease lost
Mar 7 01:21:57.869421 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:21:57.869543 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:21:57.874049 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:21:57.874179 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:21:57.878404 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:21:57.878469 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:21:57.886985 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:21:57.890141 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:21:57.890251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:21:57.892139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:21:57.892194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:21:57.893842 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:21:57.893896 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:21:57.895384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:21:57.895437 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:21:57.897215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:21:57.913101 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:21:57.913221 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:21:57.920813 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:21:57.921057 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:21:57.922871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:21:57.922966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:21:57.924265 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:21:57.924306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:21:57.926046 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:21:57.926103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:21:57.928498 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:21:57.928558 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:21:57.930156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:21:57.930212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:21:57.937060 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:21:57.937849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:21:57.937922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:21:57.938703 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:21:57.938754 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:21:57.942998 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:21:57.943051 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:21:57.944226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:21:57.944277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:21:57.946371 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:21:57.946477 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:21:57.947935 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:21:57.958048 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:21:57.964555 systemd[1]: Switching root.
Mar 7 01:21:57.995104 systemd-journald[178]: Journal stopped
Mar 7 01:21:59.176368 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:21:59.176397 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:21:59.176409 kernel: SELinux: policy capability open_perms=1
Mar 7 01:21:59.176419 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:21:59.176431 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:21:59.176440 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:21:59.176450 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:21:59.176459 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:21:59.176468 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:21:59.176478 kernel: audit: type=1403 audit(1772846518.161:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:21:59.176488 systemd[1]: Successfully loaded SELinux policy in 52.912ms.
Mar 7 01:21:59.176501 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.556ms.
Mar 7 01:21:59.176512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:21:59.176522 systemd[1]: Detected virtualization kvm.
Mar 7 01:21:59.176532 systemd[1]: Detected architecture x86-64.
Mar 7 01:21:59.176542 systemd[1]: Detected first boot.
Mar 7 01:21:59.176746 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:21:59.176755 zram_generator::config[1047]: No configuration found.
Mar 7 01:21:59.176766 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:21:59.176776 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:21:59.176786 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:21:59.176797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:21:59.176808 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:21:59.176820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:21:59.176830 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:21:59.176840 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:21:59.176850 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:21:59.176860 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:21:59.176870 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:21:59.176880 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:21:59.176892 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:21:59.176902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:21:59.177943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:21:59.177957 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:21:59.177968 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:21:59.178167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:21:59.178177 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:21:59.178187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:21:59.178201 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:21:59.178211 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:21:59.178224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:21:59.178235 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:21:59.178245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:21:59.178257 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:21:59.178268 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:21:59.178278 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:21:59.178291 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:21:59.178301 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:21:59.178311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:21:59.178321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:21:59.178332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:21:59.178345 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:21:59.178355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:21:59.178365 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:21:59.178375 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:21:59.178386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:21:59.178396 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:21:59.178406 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:21:59.178416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:21:59.178429 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:21:59.178439 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:21:59.178450 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:21:59.178460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:21:59.178470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:21:59.178483 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:21:59.178493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:21:59.178503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:21:59.178516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:21:59.178526 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:21:59.178536 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:21:59.178546 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:21:59.178557 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:21:59.178567 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:21:59.178577 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:21:59.178587 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:21:59.178600 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:21:59.178610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:21:59.178620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:21:59.178631 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:21:59.178641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:21:59.178671 systemd-journald[1127]: Collecting audit messages is disabled.
Mar 7 01:21:59.178693 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:21:59.178704 systemd[1]: Stopped verity-setup.service.
Mar 7 01:21:59.178716 systemd-journald[1127]: Journal started
Mar 7 01:21:59.178735 systemd-journald[1127]: Runtime Journal (/run/log/journal/a560c0f6ee4b44ee8def576fe72a642b) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:21:58.787090 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:21:59.185054 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:21:58.811510 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:21:58.812109 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:21:59.193933 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:21:59.196753 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:21:59.203297 kernel: fuse: init (API version 7.39)
Mar 7 01:21:59.203330 kernel: ACPI: bus type drm_connector registered
Mar 7 01:21:59.200699 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:21:59.201617 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:21:59.202500 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:21:59.204305 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:21:59.205967 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:21:59.207962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:21:59.210211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:21:59.216036 kernel: loop: module loaded
Mar 7 01:21:59.213611 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:21:59.213840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:21:59.215201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:21:59.215385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:21:59.217007 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:21:59.217188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:21:59.218451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:21:59.218634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:21:59.219871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:21:59.220151 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:21:59.221434 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:21:59.221725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:21:59.223238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:21:59.224694 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:21:59.225981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:21:59.244294 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:21:59.274460 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:21:59.282015 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:21:59.284018 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:21:59.284051 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:21:59.286165 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:21:59.294091 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:21:59.303154 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:21:59.304681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:21:59.309048 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:21:59.319621 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:21:59.320651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:21:59.334224 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:21:59.335768 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:21:59.339460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:21:59.351343 systemd-journald[1127]: Time spent on flushing to /var/log/journal/a560c0f6ee4b44ee8def576fe72a642b is 40.281ms for 972 entries.
Mar 7 01:21:59.351343 systemd-journald[1127]: System Journal (/var/log/journal/a560c0f6ee4b44ee8def576fe72a642b) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:21:59.423110 systemd-journald[1127]: Received client request to flush runtime journal.
Mar 7 01:21:59.354237 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:21:59.356648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:21:59.361144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:21:59.362255 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:21:59.364345 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:21:59.367364 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:21:59.379360 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:21:59.384748 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:21:59.393097 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:21:59.402112 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:21:59.434422 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:21:59.436086 kernel: loop0: detected capacity change from 0 to 8
Mar 7 01:21:59.453939 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:21:59.481192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:21:59.485815 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:21:59.489065 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:21:59.493290 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 01:21:59.503433 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:21:59.509365 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 7 01:21:59.510461 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 7 01:21:59.537723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:21:59.556950 kernel: loop2: detected capacity change from 0 to 217752
Mar 7 01:21:59.555991 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:21:59.600286 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:21:59.610449 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 01:21:59.612451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:21:59.646429 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 7 01:21:59.647036 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 7 01:21:59.653689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:21:59.664718 kernel: loop4: detected capacity change from 0 to 8
Mar 7 01:21:59.673358 kernel: loop5: detected capacity change from 0 to 142488
Mar 7 01:21:59.697787 kernel: loop6: detected capacity change from 0 to 217752
Mar 7 01:21:59.721964 kernel: loop7: detected capacity change from 0 to 140768
Mar 7 01:21:59.739349 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 7 01:21:59.740309 (sd-merge)[1195]: Merged extensions into '/usr'.
Mar 7 01:21:59.751710 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:21:59.752063 systemd[1]: Reloading...
Mar 7 01:21:59.882947 zram_generator::config[1221]: No configuration found.
Mar 7 01:21:59.889189 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:22:00.023611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:22:00.069828 systemd[1]: Reloading finished in 314 ms.
Mar 7 01:22:00.100754 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:22:00.102081 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:22:00.103196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:22:00.119083 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:22:00.122022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:22:00.126404 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:22:00.131034 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:22:00.131055 systemd[1]: Reloading...
Mar 7 01:22:00.161199 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:22:00.161554 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:22:00.164006 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:22:00.164290 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 7 01:22:00.164383 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Mar 7 01:22:00.170586 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:22:00.170604 systemd-tmpfiles[1266]: Skipping /boot
Mar 7 01:22:00.195650 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:22:00.195665 systemd-tmpfiles[1266]: Skipping /boot
Mar 7 01:22:00.198157 systemd-udevd[1267]: Using default interface naming scheme 'v255'.
Mar 7 01:22:00.256937 zram_generator::config[1295]: No configuration found.
Mar 7 01:22:00.384986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:22:00.429996 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:22:00.444364 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:22:00.444679 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:22:00.444889 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:22:00.451678 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:22:00.452426 systemd[1]: Reloading finished in 320 ms.
Mar 7 01:22:00.456941 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:22:00.475970 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:22:00.481331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:22:00.484082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:22:00.519944 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:22:00.529407 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:22:00.535170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:22:00.546032 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:22:00.556091 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:22:00.563090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:22:00.571118 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:22:00.588989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.589262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:22:00.594631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:22:00.599983 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:22:00.600654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:22:00.611475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:22:00.613077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:22:00.622028 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:22:00.622771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.635053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:22:00.641404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.641650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:22:00.641891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:22:00.642081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.644094 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:22:00.654294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:22:00.654544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:22:00.658742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:22:00.658945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:22:00.669839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.670570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:22:00.678218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:22:00.682425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:22:00.691388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:22:00.692697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:22:00.692782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:22:00.694177 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:22:00.695973 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:22:00.704437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:22:00.704686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:22:00.704994 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1324)
Mar 7 01:22:00.710688 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:22:00.720302 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:22:00.734112 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:22:00.735384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:22:00.735579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:22:00.739407 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:22:00.746343 augenrules[1406]: No rules
Mar 7 01:22:00.749428 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:22:00.764618 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:22:00.773508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:22:00.774975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:22:00.776671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:22:00.778734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:22:00.779053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:22:00.782543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:22:00.791107 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:22:00.798359 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:22:00.802151 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:22:00.816715 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:22:00.829048 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:22:00.831991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:22:00.862931 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:22:00.892968 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:22:00.961700 systemd-resolved[1372]: Positive Trust Anchors:
Mar 7 01:22:00.962076 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:22:00.962147 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:22:00.968298 systemd-networkd[1370]: lo: Link UP
Mar 7 01:22:00.968317 systemd-networkd[1370]: lo: Gained carrier
Mar 7 01:22:00.970989 systemd-resolved[1372]: Defaulting to hostname 'linux'.
Mar 7 01:22:00.972205 systemd-networkd[1370]: Enumeration completed
Mar 7 01:22:00.972689 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:22:00.972703 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:22:00.973865 systemd-networkd[1370]: eth0: Link UP
Mar 7 01:22:00.973880 systemd-networkd[1370]: eth0: Gained carrier
Mar 7 01:22:00.973892 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:22:00.989567 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:22:00.990925 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:22:00.991748 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:22:00.992925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:22:00.995644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:22:00.996457 systemd[1]: Reached target network.target - Network.
Mar 7 01:22:00.997400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:22:00.998208 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:22:00.999105 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:22:00.999990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:22:01.000788 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:22:01.001645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:22:01.001682 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:22:01.002400 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:22:01.003432 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:22:01.004333 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:22:01.005130 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:22:01.006784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:22:01.009542 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:22:01.015307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:22:01.017516 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:22:01.021094 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:22:01.022544 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:22:01.025490 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:22:01.026252 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:22:01.027124 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:22:01.027163 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:22:01.028046 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:22:01.029008 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:22:01.034233 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:22:01.039079 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:22:01.050005 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:22:01.052477 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:22:01.053816 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:22:01.060248 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:22:01.064144 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:22:01.071039 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:22:01.073714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:22:01.085964 jq[1443]: false
Mar 7 01:22:01.103094 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:22:01.104413 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:22:01.106638 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:22:01.108420 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:22:01.111038 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:22:01.114973 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:22:01.117372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:22:01.117998 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:22:01.121419 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:22:01.121670 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:22:01.145962 coreos-metadata[1441]: Mar 07 01:22:01.144 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:22:01.162229 extend-filesystems[1444]: Found loop4
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found loop5
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found loop6
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found loop7
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda1
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda2
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda3
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found usr
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda4
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda6
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda7
Mar 7 01:22:01.168014 extend-filesystems[1444]: Found sda9
Mar 7 01:22:01.168014 extend-filesystems[1444]: Checking size of /dev/sda9
Mar 7 01:22:01.164125 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:22:01.240803 extend-filesystems[1444]: Resized partition /dev/sda9
Mar 7 01:22:01.194370 dbus-daemon[1442]: [system] SELinux support is enabled
Mar 7 01:22:01.244713 update_engine[1453]: I20260307 01:22:01.234076 1453 main.cc:92] Flatcar Update Engine starting
Mar 7 01:22:01.244935 tar[1457]: linux-amd64/LICENSE
Mar 7 01:22:01.244935 tar[1457]: linux-amd64/helm
Mar 7 01:22:01.253382 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 7 01:22:01.194578 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:22:01.253537 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:22:01.260078 update_engine[1453]: I20260307 01:22:01.247681 1453 update_check_scheduler.cc:74] Next update check in 7m1s
Mar 7 01:22:01.260137 jq[1455]: true
Mar 7 01:22:01.199069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:22:01.199101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:22:01.200211 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:22:01.200229 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:22:01.238693 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:22:01.250052 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:22:01.253002 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:22:01.253236 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:22:01.272500 jq[1477]: true
Mar 7 01:22:01.361660 systemd-logind[1450]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 7 01:22:01.361695 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:22:01.366446 systemd-logind[1450]: New seat seat0.
Mar 7 01:22:01.371567 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:22:01.402955 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1324)
Mar 7 01:22:01.448575 bash[1505]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:22:01.449632 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:22:01.463675 systemd[1]: Starting sshkeys.service...
Mar 7 01:22:01.496949 containerd[1465]: time="2026-03-07T01:22:01.496732334Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:22:01.527409 containerd[1465]: time="2026-03-07T01:22:01.527348409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529492840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529522290Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529537450Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529711301Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529732771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529902801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.529946271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.530162411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.530177301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.530189631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531050 containerd[1465]: time="2026-03-07T01:22:01.530198581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530289951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530533901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530660731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530673611Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530772151Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:22:01.531449 containerd[1465]: time="2026-03-07T01:22:01.530826971Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:22:01.535925 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:22:01.540388 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:22:01.548218 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:22:01.550096 containerd[1465]: time="2026-03-07T01:22:01.550020911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:22:01.550662 containerd[1465]: time="2026-03-07T01:22:01.550293811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:22:01.550662 containerd[1465]: time="2026-03-07T01:22:01.550346491Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:22:01.550662 containerd[1465]: time="2026-03-07T01:22:01.550518741Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:22:01.550662 containerd[1465]: time="2026-03-07T01:22:01.550535491Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:22:01.552930 containerd[1465]: time="2026-03-07T01:22:01.550950311Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:22:01.552930 containerd[1465]: time="2026-03-07T01:22:01.551887642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:22:01.552984 containerd[1465]: time="2026-03-07T01:22:01.552972712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553010512Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553034752Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553049112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553061692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553084262Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553096842Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553109 containerd[1465]: time="2026-03-07T01:22:01.553110692Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553124132Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553136702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553147832Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553166902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553180802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553203402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553215962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553227612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553240582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553252002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:22:01.553262 containerd[1465]: time="2026-03-07T01:22:01.553263392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..."
type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553276682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553291472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553307392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553319462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553330812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553344512Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553364502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553375752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.553450 containerd[1465]: time="2026-03-07T01:22:01.553385582Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:22:01.553593 containerd[1465]: time="2026-03-07T01:22:01.553481122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:22:01.553593 containerd[1465]: time="2026-03-07T01:22:01.553500822Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:22:01.554281 containerd[1465]: time="2026-03-07T01:22:01.553511382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:22:01.554315 containerd[1465]: time="2026-03-07T01:22:01.554282743Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:22:01.554315 containerd[1465]: time="2026-03-07T01:22:01.554297543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:22:01.554315 containerd[1465]: time="2026-03-07T01:22:01.554311553Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:22:01.554368 containerd[1465]: time="2026-03-07T01:22:01.554323933Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:22:01.554368 containerd[1465]: time="2026-03-07T01:22:01.554335033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:22:01.555601 containerd[1465]: time="2026-03-07T01:22:01.555473243Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:22:01.555601 containerd[1465]: time="2026-03-07T01:22:01.555597183Z" level=info msg="Connect containerd service" Mar 7 01:22:01.555833 containerd[1465]: time="2026-03-07T01:22:01.555806904Z" level=info msg="using legacy CRI server" Mar 7 01:22:01.555833 containerd[1465]: time="2026-03-07T01:22:01.555827154Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:22:01.556185 containerd[1465]: time="2026-03-07T01:22:01.556156294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:22:01.557565 containerd[1465]: time="2026-03-07T01:22:01.557533524Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:22:01.557859 containerd[1465]: time="2026-03-07T01:22:01.557832805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:22:01.557934 containerd[1465]: time="2026-03-07T01:22:01.557895095Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 7 01:22:01.558843 containerd[1465]: time="2026-03-07T01:22:01.558453885Z" level=info msg="Start subscribing containerd event" Mar 7 01:22:01.559781 containerd[1465]: time="2026-03-07T01:22:01.559747016Z" level=info msg="Start recovering state" Mar 7 01:22:01.560929 containerd[1465]: time="2026-03-07T01:22:01.560292446Z" level=info msg="Start event monitor" Mar 7 01:22:01.560929 containerd[1465]: time="2026-03-07T01:22:01.560588906Z" level=info msg="Start snapshots syncer" Mar 7 01:22:01.560929 containerd[1465]: time="2026-03-07T01:22:01.560603256Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:22:01.560929 containerd[1465]: time="2026-03-07T01:22:01.560785426Z" level=info msg="Start streaming server" Mar 7 01:22:01.562167 containerd[1465]: time="2026-03-07T01:22:01.561114506Z" level=info msg="containerd successfully booted in 0.065640s" Mar 7 01:22:01.561183 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:22:01.592690 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:22:01.598946 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 7 01:22:01.613173 extend-filesystems[1480]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 01:22:01.613173 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 7 01:22:01.613173 extend-filesystems[1480]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 7 01:22:01.622729 extend-filesystems[1444]: Resized filesystem in /dev/sda9 Mar 7 01:22:01.614460 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:22:01.615351 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:22:01.632391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 7 01:22:01.635554 coreos-metadata[1517]: Mar 07 01:22:01.635 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Mar 7 01:22:01.643240 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:22:01.651974 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:22:01.652215 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:22:01.660850 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:22:01.695961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:22:01.703244 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:22:01.711436 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:22:01.713159 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:22:01.943964 tar[1457]: linux-amd64/README.md Mar 7 01:22:01.956239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:22:02.159941 coreos-metadata[1441]: Mar 07 01:22:02.159 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:22:02.237292 systemd-networkd[1370]: eth0: Gained IPv6LL Mar 7 01:22:02.237989 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:02.651471 coreos-metadata[1517]: Mar 07 01:22:02.651 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:22:02.740022 systemd-networkd[1370]: eth0: DHCPv4 address 172.232.28.122/24, gateway 172.232.28.1 acquired from 23.194.118.58 Mar 7 01:22:02.740146 dbus-daemon[1442]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 01:22:02.742520 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. 
Mar 7 01:22:02.744225 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:02.745440 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:02.752420 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 7 01:22:02.754479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:22:02.758312 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:22:02.767209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:02.772015 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:22:02.816686 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:22:02.830237 dbus-daemon[1442]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:22:02.830514 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 01:22:02.831017 dbus-daemon[1442]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1545 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:22:02.840258 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 01:22:02.855179 polkitd[1557]: Started polkitd version 121 Mar 7 01:22:02.859523 polkitd[1557]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:22:02.859580 polkitd[1557]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:22:02.862000 polkitd[1557]: Finished loading, compiling and executing 2 rules Mar 7 01:22:02.863013 dbus-daemon[1442]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:22:02.863209 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 7 01:22:02.864590 polkitd[1557]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:22:02.874108 systemd-hostnamed[1545]: Hostname set to <172-232-28-122> (transient) Mar 7 01:22:02.874612 systemd-resolved[1372]: System hostname changed to '172-232-28-122'. Mar 7 01:22:03.667930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:03.681314 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:04.030082 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:04.163296 kubelet[1571]: E0307 01:22:04.163136 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:04.167191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:04.167394 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:22:04.177034 coreos-metadata[1441]: Mar 07 01:22:04.176 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Mar 7 01:22:04.272443 coreos-metadata[1441]: Mar 07 01:22:04.272 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 7 01:22:04.459200 coreos-metadata[1441]: Mar 07 01:22:04.458 INFO Fetch successful Mar 7 01:22:04.459200 coreos-metadata[1441]: Mar 07 01:22:04.459 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 7 01:22:04.662768 coreos-metadata[1517]: Mar 07 01:22:04.662 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Mar 7 01:22:04.715043 coreos-metadata[1441]: Mar 07 01:22:04.714 INFO Fetch successful Mar 7 01:22:04.755087 coreos-metadata[1517]: Mar 07 01:22:04.754 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 7 01:22:04.807792 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:22:04.809845 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:22:04.888697 coreos-metadata[1517]: Mar 07 01:22:04.888 INFO Fetch successful Mar 7 01:22:04.907884 update-ssh-keys[1601]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:22:04.908380 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:22:04.912117 systemd[1]: Finished sshkeys.service. Mar 7 01:22:04.914941 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:22:04.916960 systemd[1]: Startup finished in 1.016s (kernel) + 8.437s (initrd) + 6.806s (userspace) = 16.261s. Mar 7 01:22:05.091315 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:22:05.096119 systemd[1]: Started sshd@0-172.232.28.122:22-68.220.241.50:59646.service - OpenSSH per-connection server daemon (68.220.241.50:59646). 
Mar 7 01:22:05.249000 sshd[1610]: Accepted publickey for core from 68.220.241.50 port 59646 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:05.251540 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:05.260825 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:22:05.269108 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:22:05.272640 systemd-logind[1450]: New session 1 of user core. Mar 7 01:22:05.283194 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:22:05.289396 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:22:05.294527 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:22:05.408917 systemd[1614]: Queued start job for default target default.target. Mar 7 01:22:05.421200 systemd[1614]: Created slice app.slice - User Application Slice. Mar 7 01:22:05.421229 systemd[1614]: Reached target paths.target - Paths. Mar 7 01:22:05.421243 systemd[1614]: Reached target timers.target - Timers. Mar 7 01:22:05.422740 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:22:05.436384 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:22:05.436513 systemd[1614]: Reached target sockets.target - Sockets. Mar 7 01:22:05.436530 systemd[1614]: Reached target basic.target - Basic System. Mar 7 01:22:05.436574 systemd[1614]: Reached target default.target - Main User Target. Mar 7 01:22:05.436613 systemd[1614]: Startup finished in 131ms. Mar 7 01:22:05.436905 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:22:05.444043 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 7 01:22:05.582443 systemd[1]: Started sshd@1-172.232.28.122:22-68.220.241.50:59658.service - OpenSSH per-connection server daemon (68.220.241.50:59658). Mar 7 01:22:05.745094 sshd[1625]: Accepted publickey for core from 68.220.241.50 port 59658 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:05.747336 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:05.752472 systemd-logind[1450]: New session 2 of user core. Mar 7 01:22:05.759044 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:22:05.882619 sshd[1625]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:05.886564 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:22:05.887354 systemd[1]: sshd@1-172.232.28.122:22-68.220.241.50:59658.service: Deactivated successfully. Mar 7 01:22:05.889420 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:22:05.890326 systemd-logind[1450]: Removed session 2. Mar 7 01:22:05.915021 systemd[1]: Started sshd@2-172.232.28.122:22-68.220.241.50:59670.service - OpenSSH per-connection server daemon (68.220.241.50:59670). Mar 7 01:22:06.092533 sshd[1632]: Accepted publickey for core from 68.220.241.50 port 59670 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:06.094640 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:06.099637 systemd-logind[1450]: New session 3 of user core. Mar 7 01:22:06.107044 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:22:06.235737 sshd[1632]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:06.240596 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:22:06.241936 systemd[1]: sshd@2-172.232.28.122:22-68.220.241.50:59670.service: Deactivated successfully. Mar 7 01:22:06.249646 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 7 01:22:06.251080 systemd-logind[1450]: Removed session 3. Mar 7 01:22:06.284204 systemd[1]: Started sshd@3-172.232.28.122:22-68.220.241.50:59686.service - OpenSSH per-connection server daemon (68.220.241.50:59686). Mar 7 01:22:06.441672 sshd[1639]: Accepted publickey for core from 68.220.241.50 port 59686 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:06.443501 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:06.449789 systemd-logind[1450]: New session 4 of user core. Mar 7 01:22:06.459093 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:22:06.581903 sshd[1639]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:06.585980 systemd[1]: sshd@3-172.232.28.122:22-68.220.241.50:59686.service: Deactivated successfully. Mar 7 01:22:06.586060 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:22:06.588998 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:22:06.591979 systemd-logind[1450]: Removed session 4. Mar 7 01:22:06.610885 systemd[1]: Started sshd@4-172.232.28.122:22-68.220.241.50:59690.service - OpenSSH per-connection server daemon (68.220.241.50:59690). Mar 7 01:22:06.764208 sshd[1646]: Accepted publickey for core from 68.220.241.50 port 59690 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:06.764816 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:06.770500 systemd-logind[1450]: New session 5 of user core. Mar 7 01:22:06.777045 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 01:22:06.880250 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:22:06.880611 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:22:06.898449 sudo[1649]: pam_unix(sudo:session): session closed for user root Mar 7 01:22:06.919980 sshd[1646]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:06.923083 systemd[1]: sshd@4-172.232.28.122:22-68.220.241.50:59690.service: Deactivated successfully. Mar 7 01:22:06.925370 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:22:06.926531 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:22:06.927669 systemd-logind[1450]: Removed session 5. Mar 7 01:22:06.949633 systemd[1]: Started sshd@5-172.232.28.122:22-68.220.241.50:59696.service - OpenSSH per-connection server daemon (68.220.241.50:59696). Mar 7 01:22:07.102032 sshd[1654]: Accepted publickey for core from 68.220.241.50 port 59696 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:07.103393 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:07.110373 systemd-logind[1450]: New session 6 of user core. Mar 7 01:22:07.116123 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 01:22:07.213671 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:22:07.214069 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:22:07.219369 sudo[1658]: pam_unix(sudo:session): session closed for user root Mar 7 01:22:07.228510 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:22:07.229121 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:22:07.251129 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:22:07.254682 auditctl[1661]: No rules Mar 7 01:22:07.255228 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:22:07.255478 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:22:07.262162 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:22:07.290114 augenrules[1679]: No rules Mar 7 01:22:07.291963 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:22:07.293193 sudo[1657]: pam_unix(sudo:session): session closed for user root Mar 7 01:22:07.315022 sshd[1654]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:07.318740 systemd[1]: sshd@5-172.232.28.122:22-68.220.241.50:59696.service: Deactivated successfully. Mar 7 01:22:07.320486 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:22:07.321024 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:22:07.321805 systemd-logind[1450]: Removed session 6. Mar 7 01:22:07.344340 systemd[1]: Started sshd@6-172.232.28.122:22-68.220.241.50:59702.service - OpenSSH per-connection server daemon (68.220.241.50:59702). 
Mar 7 01:22:07.509120 sshd[1687]: Accepted publickey for core from 68.220.241.50 port 59702 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:22:07.509991 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:22:07.515546 systemd-logind[1450]: New session 7 of user core. Mar 7 01:22:07.525101 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:22:07.621870 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:22:07.622261 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:22:07.909377 (dockerd)[1706]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:22:07.909956 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:22:08.194150 dockerd[1706]: time="2026-03-07T01:22:08.193248000Z" level=info msg="Starting up" Mar 7 01:22:08.288245 dockerd[1706]: time="2026-03-07T01:22:08.288207567Z" level=info msg="Loading containers: start." Mar 7 01:22:08.392937 kernel: Initializing XFRM netlink socket Mar 7 01:22:08.414833 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:08.418107 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:08.429624 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:08.471193 systemd-networkd[1370]: docker0: Link UP Mar 7 01:22:08.472198 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:22:08.485267 dockerd[1706]: time="2026-03-07T01:22:08.485228326Z" level=info msg="Loading containers: done." 
Mar 7 01:22:08.500289 dockerd[1706]: time="2026-03-07T01:22:08.500250713Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:22:08.500629 dockerd[1706]: time="2026-03-07T01:22:08.500612404Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:22:08.500745 dockerd[1706]: time="2026-03-07T01:22:08.500728534Z" level=info msg="Daemon has completed initialization" Mar 7 01:22:08.531044 dockerd[1706]: time="2026-03-07T01:22:08.530891969Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:22:08.531332 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:22:08.981342 containerd[1465]: time="2026-03-07T01:22:08.980901704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 01:22:09.543392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805448007.mount: Deactivated successfully. 
Mar 7 01:22:10.606018 containerd[1465]: time="2026-03-07T01:22:10.605938586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:10.607428 containerd[1465]: time="2026-03-07T01:22:10.607388366Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696473" Mar 7 01:22:10.608957 containerd[1465]: time="2026-03-07T01:22:10.607734306Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:10.610966 containerd[1465]: time="2026-03-07T01:22:10.610577988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:10.611924 containerd[1465]: time="2026-03-07T01:22:10.611879418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.630910084s" Mar 7 01:22:10.612007 containerd[1465]: time="2026-03-07T01:22:10.611991039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 01:22:10.613153 containerd[1465]: time="2026-03-07T01:22:10.613115749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 01:22:11.839098 containerd[1465]: time="2026-03-07T01:22:11.839040242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:11.840058 containerd[1465]: time="2026-03-07T01:22:11.839926622Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450706" Mar 7 01:22:11.840830 containerd[1465]: time="2026-03-07T01:22:11.840494452Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:11.843048 containerd[1465]: time="2026-03-07T01:22:11.843009114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:11.844577 containerd[1465]: time="2026-03-07T01:22:11.843961764Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.230814505s" Mar 7 01:22:11.844577 containerd[1465]: time="2026-03-07T01:22:11.843989244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 01:22:11.845047 containerd[1465]: time="2026-03-07T01:22:11.845020535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 01:22:12.822162 containerd[1465]: time="2026-03-07T01:22:12.822112923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:12.823936 containerd[1465]: time="2026-03-07T01:22:12.822760023Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548435" Mar 7 01:22:12.824083 containerd[1465]: time="2026-03-07T01:22:12.824058054Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:12.826939 containerd[1465]: time="2026-03-07T01:22:12.826895465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:12.828144 containerd[1465]: time="2026-03-07T01:22:12.828114326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 983.062041ms" Mar 7 01:22:12.828242 containerd[1465]: time="2026-03-07T01:22:12.828226316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 01:22:12.830469 containerd[1465]: time="2026-03-07T01:22:12.830435497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 01:22:13.814333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928611575.mount: Deactivated successfully. 
Mar 7 01:22:14.050762 containerd[1465]: time="2026-03-07T01:22:14.050703097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:14.051631 containerd[1465]: time="2026-03-07T01:22:14.051423337Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685318" Mar 7 01:22:14.052281 containerd[1465]: time="2026-03-07T01:22:14.052059577Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:14.054176 containerd[1465]: time="2026-03-07T01:22:14.054138758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:14.054934 containerd[1465]: time="2026-03-07T01:22:14.054886299Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.224411852s" Mar 7 01:22:14.054991 containerd[1465]: time="2026-03-07T01:22:14.054948759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 01:22:14.055502 containerd[1465]: time="2026-03-07T01:22:14.055468769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 01:22:14.417601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:22:14.423136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:22:14.579059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:14.589076 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:22:14.596687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275373434.mount: Deactivated successfully. Mar 7 01:22:14.644822 kubelet[1930]: E0307 01:22:14.644774 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:22:14.650359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:22:14.650557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:22:15.393699 containerd[1465]: time="2026-03-07T01:22:15.393351968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.394491 containerd[1465]: time="2026-03-07T01:22:15.394306188Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548" Mar 7 01:22:15.395940 containerd[1465]: time="2026-03-07T01:22:15.395012418Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.398252 containerd[1465]: time="2026-03-07T01:22:15.397597810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.398888 containerd[1465]: time="2026-03-07T01:22:15.398858980Z" 
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.343359381s" Mar 7 01:22:15.398964 containerd[1465]: time="2026-03-07T01:22:15.398890130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 01:22:15.400488 containerd[1465]: time="2026-03-07T01:22:15.400459481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:22:15.864084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337558740.mount: Deactivated successfully. Mar 7 01:22:15.871133 containerd[1465]: time="2026-03-07T01:22:15.871101396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.872928 containerd[1465]: time="2026-03-07T01:22:15.872035037Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Mar 7 01:22:15.872928 containerd[1465]: time="2026-03-07T01:22:15.872077147Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.874175 containerd[1465]: time="2026-03-07T01:22:15.874140818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:15.875003 containerd[1465]: time="2026-03-07T01:22:15.874817358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 474.248217ms" Mar 7 01:22:15.875003 containerd[1465]: time="2026-03-07T01:22:15.874847758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:22:15.875295 containerd[1465]: time="2026-03-07T01:22:15.875275388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 01:22:16.372090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781402209.mount: Deactivated successfully. Mar 7 01:22:16.984190 containerd[1465]: time="2026-03-07T01:22:16.984098502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:16.985418 containerd[1465]: time="2026-03-07T01:22:16.985169703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630328" Mar 7 01:22:16.987881 containerd[1465]: time="2026-03-07T01:22:16.987109674Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:16.991588 containerd[1465]: time="2026-03-07T01:22:16.991549316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:16.992508 containerd[1465]: time="2026-03-07T01:22:16.992482497Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest 
\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.117180599s" Mar 7 01:22:16.992589 containerd[1465]: time="2026-03-07T01:22:16.992572077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 01:22:17.912213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:17.920106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:17.959346 systemd[1]: Reloading requested from client PID 2082 ('systemctl') (unit session-7.scope)... Mar 7 01:22:17.959359 systemd[1]: Reloading... Mar 7 01:22:18.109043 zram_generator::config[2131]: No configuration found. Mar 7 01:22:18.210802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:22:18.282855 systemd[1]: Reloading finished in 322 ms. Mar 7 01:22:18.338304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:18.343057 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:18.343996 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:22:18.344243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:18.351375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:18.497615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:18.506233 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:22:18.540122 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:22:18.778596 kubelet[2178]: I0307 01:22:18.778493 2178 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:22:18.779938 kubelet[2178]: I0307 01:22:18.778836 2178 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:22:18.779938 kubelet[2178]: I0307 01:22:18.778857 2178 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:22:18.779938 kubelet[2178]: I0307 01:22:18.778863 2178 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:22:18.779938 kubelet[2178]: I0307 01:22:18.779231 2178 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:22:18.784636 kubelet[2178]: E0307 01:22:18.784616 2178 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.28.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.28.122:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:22:18.784840 kubelet[2178]: I0307 01:22:18.784815 2178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:22:18.788494 kubelet[2178]: E0307 01:22:18.788459 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:22:18.788545 kubelet[2178]: I0307 01:22:18.788511 2178 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:22:18.791811 kubelet[2178]: I0307 01:22:18.791797 2178 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:22:18.793139 kubelet[2178]: I0307 01:22:18.793106 2178 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:22:18.793287 kubelet[2178]: I0307 01:22:18.793130 2178 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-28-122","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:22:18.793287 kubelet[2178]: I0307 01:22:18.793284 2178 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:22:18.793396 
kubelet[2178]: I0307 01:22:18.793293 2178 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:22:18.793396 kubelet[2178]: I0307 01:22:18.793375 2178 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:22:18.794282 kubelet[2178]: I0307 01:22:18.794266 2178 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:22:18.794425 kubelet[2178]: I0307 01:22:18.794413 2178 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:22:18.794455 kubelet[2178]: I0307 01:22:18.794426 2178 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:22:18.794455 kubelet[2178]: I0307 01:22:18.794448 2178 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:22:18.794455 kubelet[2178]: I0307 01:22:18.794456 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:22:18.796652 kubelet[2178]: I0307 01:22:18.796608 2178 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:22:18.799056 kubelet[2178]: I0307 01:22:18.798810 2178 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:22:18.799056 kubelet[2178]: I0307 01:22:18.798841 2178 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:22:18.799056 kubelet[2178]: W0307 01:22:18.798898 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 01:22:18.801832 kubelet[2178]: I0307 01:22:18.801814 2178 server.go:1257] "Started kubelet" Mar 7 01:22:18.802148 kubelet[2178]: I0307 01:22:18.802121 2178 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:22:18.803079 kubelet[2178]: I0307 01:22:18.802836 2178 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:22:18.808397 kubelet[2178]: I0307 01:22:18.808361 2178 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:22:18.808847 kubelet[2178]: I0307 01:22:18.808467 2178 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:22:18.808847 kubelet[2178]: I0307 01:22:18.808669 2178 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:22:18.812142 kubelet[2178]: I0307 01:22:18.812108 2178 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:22:18.813884 kubelet[2178]: I0307 01:22:18.812944 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:22:18.813884 kubelet[2178]: E0307 01:22:18.811083 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.28.122:6443/api/v1/namespaces/default/events\": dial tcp 172.232.28.122:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-28-122.189a6a88ba8fc5c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-28-122,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-28-122,},FirstTimestamp:2026-03-07 01:22:18.801792451 +0000 UTC m=+0.291689027,LastTimestamp:2026-03-07 01:22:18.801792451 +0000 UTC m=+0.291689027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-28-122,}" Mar 7 01:22:18.815326 kubelet[2178]: I0307 01:22:18.815306 2178 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:22:18.815462 kubelet[2178]: E0307 01:22:18.815442 2178 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-232-28-122\" not found" Mar 7 01:22:18.815658 kubelet[2178]: I0307 01:22:18.815639 2178 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:22:18.815707 kubelet[2178]: I0307 01:22:18.815690 2178 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:22:18.816570 kubelet[2178]: E0307 01:22:18.816544 2178 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.28.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-28-122?timeout=10s\": dial tcp 172.232.28.122:6443: connect: connection refused" interval="200ms" Mar 7 01:22:18.817881 kubelet[2178]: I0307 01:22:18.817859 2178 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:22:18.817986 kubelet[2178]: I0307 01:22:18.817939 2178 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:22:18.818197 kubelet[2178]: E0307 01:22:18.818171 2178 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:22:18.819563 kubelet[2178]: I0307 01:22:18.818982 2178 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:22:18.834626 kubelet[2178]: I0307 01:22:18.834516 2178 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:22:18.835694 kubelet[2178]: I0307 01:22:18.835671 2178 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:22:18.835694 kubelet[2178]: I0307 01:22:18.835690 2178 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:22:18.835762 kubelet[2178]: I0307 01:22:18.835707 2178 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:22:18.835784 kubelet[2178]: E0307 01:22:18.835757 2178 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:22:18.852145 kubelet[2178]: I0307 01:22:18.852127 2178 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:22:18.852145 kubelet[2178]: I0307 01:22:18.852140 2178 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:22:18.852489 kubelet[2178]: I0307 01:22:18.852155 2178 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:22:18.854224 kubelet[2178]: I0307 01:22:18.854205 2178 policy_none.go:50] "Start" Mar 7 01:22:18.854224 kubelet[2178]: I0307 01:22:18.854223 2178 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:22:18.854301 kubelet[2178]: I0307 01:22:18.854234 2178 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:22:18.854947 kubelet[2178]: I0307 01:22:18.854935 2178 policy_none.go:44] "Start" Mar 7 01:22:18.859411 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:22:18.871102 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:22:18.874330 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:22:18.881129 kubelet[2178]: E0307 01:22:18.881101 2178 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:22:18.881617 kubelet[2178]: I0307 01:22:18.881557 2178 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:22:18.881617 kubelet[2178]: I0307 01:22:18.881573 2178 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:22:18.883523 kubelet[2178]: I0307 01:22:18.883072 2178 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:22:18.884746 kubelet[2178]: E0307 01:22:18.884722 2178 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:22:18.885067 kubelet[2178]: E0307 01:22:18.885005 2178 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-28-122\" not found" Mar 7 01:22:18.948595 systemd[1]: Created slice kubepods-burstable-podfaad38db973e48e215c18cdbdb8c60c6.slice - libcontainer container kubepods-burstable-podfaad38db973e48e215c18cdbdb8c60c6.slice. Mar 7 01:22:18.956824 kubelet[2178]: E0307 01:22:18.956640 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:18.960283 systemd[1]: Created slice kubepods-burstable-podcc553555909bddb46957cf357d375f6e.slice - libcontainer container kubepods-burstable-podcc553555909bddb46957cf357d375f6e.slice. 
Mar 7 01:22:18.973268 kubelet[2178]: E0307 01:22:18.973254 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:18.976554 systemd[1]: Created slice kubepods-burstable-podd58b72675befd6f1c99466768b464de8.slice - libcontainer container kubepods-burstable-podd58b72675befd6f1c99466768b464de8.slice. Mar 7 01:22:18.978268 kubelet[2178]: E0307 01:22:18.978251 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:18.983442 kubelet[2178]: I0307 01:22:18.983414 2178 kubelet_node_status.go:74] "Attempting to register node" node="172-232-28-122" Mar 7 01:22:18.983723 kubelet[2178]: E0307 01:22:18.983705 2178 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.232.28.122:6443/api/v1/nodes\": dial tcp 172.232.28.122:6443: connect: connection refused" node="172-232-28-122" Mar 7 01:22:19.017810 kubelet[2178]: I0307 01:22:19.017259 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-k8s-certs\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:19.017810 kubelet[2178]: I0307 01:22:19.017287 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:19.017810 kubelet[2178]: I0307 01:22:19.017305 2178 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d58b72675befd6f1c99466768b464de8-kubeconfig\") pod \"kube-scheduler-172-232-28-122\" (UID: \"d58b72675befd6f1c99466768b464de8\") " pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:19.017810 kubelet[2178]: I0307 01:22:19.017320 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-ca-certs\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:19.017810 kubelet[2178]: I0307 01:22:19.017361 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:19.018151 kubelet[2178]: I0307 01:22:19.017377 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-ca-certs\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:19.018151 kubelet[2178]: I0307 01:22:19.017402 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-flexvolume-dir\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:19.018151 kubelet[2178]: I0307 01:22:19.017417 2178 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-kubeconfig\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:19.018151 kubelet[2178]: E0307 01:22:19.017426 2178 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.28.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-28-122?timeout=10s\": dial tcp 172.232.28.122:6443: connect: connection refused" interval="400ms" Mar 7 01:22:19.018151 kubelet[2178]: I0307 01:22:19.017434 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-k8s-certs\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:19.186203 kubelet[2178]: I0307 01:22:19.186104 2178 kubelet_node_status.go:74] "Attempting to register node" node="172-232-28-122" Mar 7 01:22:19.186399 kubelet[2178]: E0307 01:22:19.186374 2178 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.232.28.122:6443/api/v1/nodes\": dial tcp 172.232.28.122:6443: connect: connection refused" node="172-232-28-122" Mar 7 01:22:19.259374 kubelet[2178]: E0307 01:22:19.259347 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:19.260226 containerd[1465]: time="2026-03-07T01:22:19.260195790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-28-122,Uid:faad38db973e48e215c18cdbdb8c60c6,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:19.275101 
kubelet[2178]: E0307 01:22:19.275081 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:19.275441 containerd[1465]: time="2026-03-07T01:22:19.275406717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-28-122,Uid:cc553555909bddb46957cf357d375f6e,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:19.279865 kubelet[2178]: E0307 01:22:19.279714 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:19.280051 containerd[1465]: time="2026-03-07T01:22:19.279976110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-28-122,Uid:d58b72675befd6f1c99466768b464de8,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:19.419217 kubelet[2178]: E0307 01:22:19.419159 2178 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.28.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-28-122?timeout=10s\": dial tcp 172.232.28.122:6443: connect: connection refused" interval="800ms" Mar 7 01:22:19.589218 kubelet[2178]: I0307 01:22:19.589096 2178 kubelet_node_status.go:74] "Attempting to register node" node="172-232-28-122" Mar 7 01:22:19.590121 kubelet[2178]: E0307 01:22:19.590081 2178 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.232.28.122:6443/api/v1/nodes\": dial tcp 172.232.28.122:6443: connect: connection refused" node="172-232-28-122" Mar 7 01:22:19.738187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943164031.mount: Deactivated successfully. 
Mar 7 01:22:19.740208 containerd[1465]: time="2026-03-07T01:22:19.740164379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:22:19.742173 containerd[1465]: time="2026-03-07T01:22:19.742125450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Mar 7 01:22:19.742534 containerd[1465]: time="2026-03-07T01:22:19.742479271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:22:19.743284 containerd[1465]: time="2026-03-07T01:22:19.743247061Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:22:19.744701 containerd[1465]: time="2026-03-07T01:22:19.744349702Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:22:19.744701 containerd[1465]: time="2026-03-07T01:22:19.744636992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:22:19.745123 containerd[1465]: time="2026-03-07T01:22:19.745081382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:22:19.749269 containerd[1465]: time="2026-03-07T01:22:19.749239384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:22:19.751934 
containerd[1465]: time="2026-03-07T01:22:19.750217655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.756198ms" Mar 7 01:22:19.751934 containerd[1465]: time="2026-03-07T01:22:19.751384655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.116665ms" Mar 7 01:22:19.752632 containerd[1465]: time="2026-03-07T01:22:19.752572836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.553786ms" Mar 7 01:22:19.859458 containerd[1465]: time="2026-03-07T01:22:19.859129779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:19.859458 containerd[1465]: time="2026-03-07T01:22:19.859176379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:19.859458 containerd[1465]: time="2026-03-07T01:22:19.859190329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.859458 containerd[1465]: time="2026-03-07T01:22:19.859289649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.862389 containerd[1465]: time="2026-03-07T01:22:19.862147070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:19.862389 containerd[1465]: time="2026-03-07T01:22:19.862215620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:19.862389 containerd[1465]: time="2026-03-07T01:22:19.862229730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.862389 containerd[1465]: time="2026-03-07T01:22:19.862299321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.870033 containerd[1465]: time="2026-03-07T01:22:19.869967614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:19.870097 containerd[1465]: time="2026-03-07T01:22:19.870053434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:19.870125 containerd[1465]: time="2026-03-07T01:22:19.870092164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.871067 containerd[1465]: time="2026-03-07T01:22:19.871018985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:19.894064 systemd[1]: Started cri-containerd-2f72979d3fc0bc00b91f3bc85e3ad891444ddc0979adaa9087d860e42b577c8b.scope - libcontainer container 2f72979d3fc0bc00b91f3bc85e3ad891444ddc0979adaa9087d860e42b577c8b. 
Mar 7 01:22:19.909198 systemd[1]: Started cri-containerd-e8598a5ede38b4f241a709d0e2f4ed14af9c2b84bb03adfa171ef39c6a853ae8.scope - libcontainer container e8598a5ede38b4f241a709d0e2f4ed14af9c2b84bb03adfa171ef39c6a853ae8. Mar 7 01:22:19.917148 systemd[1]: Started cri-containerd-300b3a6c2e73f3473b6fae664461a00deed6593aadb9ab0c292ecf96a1c5400a.scope - libcontainer container 300b3a6c2e73f3473b6fae664461a00deed6593aadb9ab0c292ecf96a1c5400a. Mar 7 01:22:19.965145 containerd[1465]: time="2026-03-07T01:22:19.964739932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-28-122,Uid:faad38db973e48e215c18cdbdb8c60c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8598a5ede38b4f241a709d0e2f4ed14af9c2b84bb03adfa171ef39c6a853ae8\"" Mar 7 01:22:19.967578 kubelet[2178]: E0307 01:22:19.967540 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:19.976136 containerd[1465]: time="2026-03-07T01:22:19.975189227Z" level=info msg="CreateContainer within sandbox \"e8598a5ede38b4f241a709d0e2f4ed14af9c2b84bb03adfa171ef39c6a853ae8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:22:19.990944 containerd[1465]: time="2026-03-07T01:22:19.990895575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-28-122,Uid:d58b72675befd6f1c99466768b464de8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f72979d3fc0bc00b91f3bc85e3ad891444ddc0979adaa9087d860e42b577c8b\"" Mar 7 01:22:19.992427 kubelet[2178]: E0307 01:22:19.992405 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:19.995504 containerd[1465]: time="2026-03-07T01:22:19.995482047Z" level=info msg="CreateContainer within sandbox 
\"2f72979d3fc0bc00b91f3bc85e3ad891444ddc0979adaa9087d860e42b577c8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:22:20.000776 containerd[1465]: time="2026-03-07T01:22:20.000730800Z" level=info msg="CreateContainer within sandbox \"e8598a5ede38b4f241a709d0e2f4ed14af9c2b84bb03adfa171ef39c6a853ae8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"864366e41d69c28da0da7f30a15b1548d3ad5cad1151e691b5d19ce681bc92c6\"" Mar 7 01:22:20.001498 containerd[1465]: time="2026-03-07T01:22:20.001479360Z" level=info msg="StartContainer for \"864366e41d69c28da0da7f30a15b1548d3ad5cad1151e691b5d19ce681bc92c6\"" Mar 7 01:22:20.004649 containerd[1465]: time="2026-03-07T01:22:20.004612892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-28-122,Uid:cc553555909bddb46957cf357d375f6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"300b3a6c2e73f3473b6fae664461a00deed6593aadb9ab0c292ecf96a1c5400a\"" Mar 7 01:22:20.005615 kubelet[2178]: E0307 01:22:20.005598 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:20.006604 containerd[1465]: time="2026-03-07T01:22:20.006583033Z" level=info msg="CreateContainer within sandbox \"2f72979d3fc0bc00b91f3bc85e3ad891444ddc0979adaa9087d860e42b577c8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc653416c3389bbe4970db205fb8ed17a2ab0320f94355a6e997f5e6e83a4f89\"" Mar 7 01:22:20.008281 containerd[1465]: time="2026-03-07T01:22:20.007197083Z" level=info msg="StartContainer for \"dc653416c3389bbe4970db205fb8ed17a2ab0320f94355a6e997f5e6e83a4f89\"" Mar 7 01:22:20.010614 containerd[1465]: time="2026-03-07T01:22:20.010595055Z" level=info msg="CreateContainer within sandbox \"300b3a6c2e73f3473b6fae664461a00deed6593aadb9ab0c292ecf96a1c5400a\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:22:20.022741 containerd[1465]: time="2026-03-07T01:22:20.022709071Z" level=info msg="CreateContainer within sandbox \"300b3a6c2e73f3473b6fae664461a00deed6593aadb9ab0c292ecf96a1c5400a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d04824a93aac5bd76716490a5fb287f9327170eadd6cc98e3a1e47ae29374f04\"" Mar 7 01:22:20.023949 containerd[1465]: time="2026-03-07T01:22:20.023899961Z" level=info msg="StartContainer for \"d04824a93aac5bd76716490a5fb287f9327170eadd6cc98e3a1e47ae29374f04\"" Mar 7 01:22:20.050054 systemd[1]: Started cri-containerd-864366e41d69c28da0da7f30a15b1548d3ad5cad1151e691b5d19ce681bc92c6.scope - libcontainer container 864366e41d69c28da0da7f30a15b1548d3ad5cad1151e691b5d19ce681bc92c6. Mar 7 01:22:20.052232 systemd[1]: Started cri-containerd-dc653416c3389bbe4970db205fb8ed17a2ab0320f94355a6e997f5e6e83a4f89.scope - libcontainer container dc653416c3389bbe4970db205fb8ed17a2ab0320f94355a6e997f5e6e83a4f89. Mar 7 01:22:20.056846 systemd[1]: Started cri-containerd-d04824a93aac5bd76716490a5fb287f9327170eadd6cc98e3a1e47ae29374f04.scope - libcontainer container d04824a93aac5bd76716490a5fb287f9327170eadd6cc98e3a1e47ae29374f04. 
Mar 7 01:22:20.104186 containerd[1465]: time="2026-03-07T01:22:20.104144481Z" level=info msg="StartContainer for \"864366e41d69c28da0da7f30a15b1548d3ad5cad1151e691b5d19ce681bc92c6\" returns successfully" Mar 7 01:22:20.131292 containerd[1465]: time="2026-03-07T01:22:20.131193385Z" level=info msg="StartContainer for \"dc653416c3389bbe4970db205fb8ed17a2ab0320f94355a6e997f5e6e83a4f89\" returns successfully" Mar 7 01:22:20.144154 containerd[1465]: time="2026-03-07T01:22:20.144029001Z" level=info msg="StartContainer for \"d04824a93aac5bd76716490a5fb287f9327170eadd6cc98e3a1e47ae29374f04\" returns successfully" Mar 7 01:22:20.393317 kubelet[2178]: I0307 01:22:20.393153 2178 kubelet_node_status.go:74] "Attempting to register node" node="172-232-28-122" Mar 7 01:22:20.852653 kubelet[2178]: E0307 01:22:20.852548 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:20.853078 kubelet[2178]: E0307 01:22:20.852657 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:20.857784 kubelet[2178]: E0307 01:22:20.857758 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:20.858087 kubelet[2178]: E0307 01:22:20.858066 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:20.858827 kubelet[2178]: E0307 01:22:20.858806 2178 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:20.858934 kubelet[2178]: E0307 01:22:20.858902 2178 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:21.083844 kubelet[2178]: E0307 01:22:21.083804 2178 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-28-122\" not found" node="172-232-28-122" Mar 7 01:22:21.125054 kubelet[2178]: I0307 01:22:21.124930 2178 kubelet_node_status.go:77] "Successfully registered node" node="172-232-28-122" Mar 7 01:22:21.125054 kubelet[2178]: E0307 01:22:21.124962 2178 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-232-28-122\": node \"172-232-28-122\" not found" Mar 7 01:22:21.137027 kubelet[2178]: E0307 01:22:21.136991 2178 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-232-28-122\" not found" Mar 7 01:22:21.239100 kubelet[2178]: E0307 01:22:21.239055 2178 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-232-28-122\" not found" Mar 7 01:22:21.340008 kubelet[2178]: E0307 01:22:21.339965 2178 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-232-28-122\" not found" Mar 7 01:22:21.416966 kubelet[2178]: I0307 01:22:21.416765 2178 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:21.428291 kubelet[2178]: E0307 01:22:21.428203 2178 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-28-122\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:21.428291 kubelet[2178]: I0307 01:22:21.428261 2178 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:21.430645 kubelet[2178]: E0307 01:22:21.430610 2178 kubelet.go:3342] "Failed 
creating a mirror pod" err="pods \"kube-controller-manager-172-232-28-122\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:21.430645 kubelet[2178]: I0307 01:22:21.430640 2178 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:21.432591 kubelet[2178]: E0307 01:22:21.432562 2178 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-28-122\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:21.798258 kubelet[2178]: I0307 01:22:21.797817 2178 apiserver.go:52] "Watching apiserver" Mar 7 01:22:21.815796 kubelet[2178]: I0307 01:22:21.815756 2178 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:22:21.860274 kubelet[2178]: I0307 01:22:21.859213 2178 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:21.860274 kubelet[2178]: I0307 01:22:21.859408 2178 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:21.863398 kubelet[2178]: E0307 01:22:21.863091 2178 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-28-122\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:21.863398 kubelet[2178]: E0307 01:22:21.863091 2178 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-28-122\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:21.863398 kubelet[2178]: E0307 01:22:21.863312 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:21.863398 kubelet[2178]: E0307 01:22:21.863346 2178 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:22.952251 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)... Mar 7 01:22:22.952268 systemd[1]: Reloading... Mar 7 01:22:23.060965 zram_generator::config[2506]: No configuration found. Mar 7 01:22:23.160271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:22:23.244719 systemd[1]: Reloading finished in 292 ms. Mar 7 01:22:23.296243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:23.309133 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:22:23.309398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:23.322186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:22:23.465253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:22:23.474442 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:22:23.526301 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:22:23.537992 kubelet[2548]: I0307 01:22:23.537936 2548 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:22:23.537992 kubelet[2548]: I0307 01:22:23.537973 2548 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:22:23.537992 kubelet[2548]: I0307 01:22:23.537996 2548 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:22:23.538152 kubelet[2548]: I0307 01:22:23.538004 2548 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:22:23.538328 kubelet[2548]: I0307 01:22:23.538301 2548 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:22:23.539390 kubelet[2548]: I0307 01:22:23.539364 2548 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:22:23.541714 kubelet[2548]: I0307 01:22:23.541351 2548 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:22:23.544217 kubelet[2548]: E0307 01:22:23.544190 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:22:23.544320 kubelet[2548]: I0307 01:22:23.544301 2548 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:22:23.549689 kubelet[2548]: I0307 01:22:23.549512 2548 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:22:23.549861 kubelet[2548]: I0307 01:22:23.549830 2548 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:22:23.550531 kubelet[2548]: I0307 01:22:23.549860 2548 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-28-122","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:22:23.550531 kubelet[2548]: I0307 01:22:23.550493 2548 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:22:23.550684 
kubelet[2548]: I0307 01:22:23.550540 2548 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:22:23.550684 kubelet[2548]: I0307 01:22:23.550570 2548 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:22:23.550905 kubelet[2548]: I0307 01:22:23.550879 2548 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:22:23.551127 kubelet[2548]: I0307 01:22:23.551102 2548 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:22:23.551186 kubelet[2548]: I0307 01:22:23.551130 2548 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:22:23.551186 kubelet[2548]: I0307 01:22:23.551151 2548 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:22:23.551186 kubelet[2548]: I0307 01:22:23.551163 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:22:23.554771 kubelet[2548]: I0307 01:22:23.553725 2548 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:22:23.556111 kubelet[2548]: I0307 01:22:23.556087 2548 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:22:23.556172 kubelet[2548]: I0307 01:22:23.556131 2548 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:22:23.561229 kubelet[2548]: I0307 01:22:23.561203 2548 server.go:1257] "Started kubelet" Mar 7 01:22:23.567694 kubelet[2548]: I0307 01:22:23.567237 2548 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:22:23.580740 kubelet[2548]: I0307 01:22:23.579719 2548 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:22:23.582227 kubelet[2548]: I0307 01:22:23.582205 2548 server.go:317] "Adding debug handlers to kubelet server" Mar 7 
01:22:23.587055 kubelet[2548]: I0307 01:22:23.586596 2548 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:22:23.587161 kubelet[2548]: I0307 01:22:23.587138 2548 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:22:23.587532 kubelet[2548]: I0307 01:22:23.587511 2548 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:22:23.587728 kubelet[2548]: I0307 01:22:23.587706 2548 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:22:23.588938 kubelet[2548]: I0307 01:22:23.587928 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:22:23.591020 kubelet[2548]: I0307 01:22:23.590999 2548 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:22:23.591604 kubelet[2548]: I0307 01:22:23.591575 2548 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:22:23.596947 kubelet[2548]: I0307 01:22:23.596263 2548 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:22:23.597032 kubelet[2548]: I0307 01:22:23.597018 2548 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:22:23.597214 kubelet[2548]: I0307 01:22:23.597194 2548 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:22:23.598295 kubelet[2548]: I0307 01:22:23.597707 2548 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:22:23.600024 kubelet[2548]: I0307 01:22:23.599999 2548 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:22:23.600024 kubelet[2548]: I0307 01:22:23.600022 2548 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:22:23.600161 kubelet[2548]: I0307 01:22:23.600051 2548 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:22:23.600299 kubelet[2548]: E0307 01:22:23.600268 2548 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:22:23.629988 kubelet[2548]: E0307 01:22:23.629960 2548 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:22:23.670698 kubelet[2548]: I0307 01:22:23.670672 2548 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:22:23.671100 kubelet[2548]: I0307 01:22:23.671083 2548 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:22:23.671183 kubelet[2548]: I0307 01:22:23.671169 2548 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:22:23.671400 kubelet[2548]: I0307 01:22:23.671379 2548 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 01:22:23.671501 kubelet[2548]: I0307 01:22:23.671470 2548 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 01:22:23.671563 kubelet[2548]: I0307 01:22:23.671552 2548 policy_none.go:50] "Start" Mar 7 01:22:23.671622 kubelet[2548]: I0307 01:22:23.671611 2548 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:22:23.671695 kubelet[2548]: I0307 01:22:23.671682 2548 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:22:23.672103 kubelet[2548]: I0307 01:22:23.672085 2548 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 
01:22:23.672186 kubelet[2548]: I0307 01:22:23.672174 2548 policy_none.go:44] "Start" Mar 7 01:22:23.679129 kubelet[2548]: E0307 01:22:23.678259 2548 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:22:23.679129 kubelet[2548]: I0307 01:22:23.678451 2548 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:22:23.679129 kubelet[2548]: I0307 01:22:23.678463 2548 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:22:23.679129 kubelet[2548]: I0307 01:22:23.678979 2548 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:22:23.682274 kubelet[2548]: E0307 01:22:23.682254 2548 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:22:23.701662 kubelet[2548]: I0307 01:22:23.701643 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:23.705324 kubelet[2548]: I0307 01:22:23.704488 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:23.705468 kubelet[2548]: I0307 01:22:23.704865 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:23.785493 kubelet[2548]: I0307 01:22:23.785394 2548 kubelet_node_status.go:74] "Attempting to register node" node="172-232-28-122" Mar 7 01:22:23.792269 kubelet[2548]: I0307 01:22:23.792238 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-k8s-certs\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:23.792399 kubelet[2548]: 
I0307 01:22:23.792386 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-kubeconfig\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:23.792507 kubelet[2548]: I0307 01:22:23.792493 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d58b72675befd6f1c99466768b464de8-kubeconfig\") pod \"kube-scheduler-172-232-28-122\" (UID: \"d58b72675befd6f1c99466768b464de8\") " pod="kube-system/kube-scheduler-172-232-28-122" Mar 7 01:22:23.792618 kubelet[2548]: I0307 01:22:23.792607 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-ca-certs\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:23.792726 kubelet[2548]: I0307 01:22:23.792711 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faad38db973e48e215c18cdbdb8c60c6-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-28-122\" (UID: \"faad38db973e48e215c18cdbdb8c60c6\") " pod="kube-system/kube-apiserver-172-232-28-122" Mar 7 01:22:23.792814 kubelet[2548]: I0307 01:22:23.792802 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-ca-certs\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 
01:22:23.792926 kubelet[2548]: I0307 01:22:23.792887 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-flexvolume-dir\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:23.793040 kubelet[2548]: I0307 01:22:23.792904 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-k8s-certs\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:23.793040 kubelet[2548]: I0307 01:22:23.793002 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc553555909bddb46957cf357d375f6e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-28-122\" (UID: \"cc553555909bddb46957cf357d375f6e\") " pod="kube-system/kube-controller-manager-172-232-28-122" Mar 7 01:22:23.794653 kubelet[2548]: I0307 01:22:23.794611 2548 kubelet_node_status.go:123] "Node was previously registered" node="172-232-28-122" Mar 7 01:22:23.794766 kubelet[2548]: I0307 01:22:23.794748 2548 kubelet_node_status.go:77] "Successfully registered node" node="172-232-28-122" Mar 7 01:22:24.015161 kubelet[2548]: E0307 01:22:24.014449 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.015161 kubelet[2548]: E0307 01:22:24.014934 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.015161 kubelet[2548]: E0307 01:22:24.015069 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.552303 kubelet[2548]: I0307 01:22:24.552060 2548 apiserver.go:52] "Watching apiserver" Mar 7 01:22:24.593712 kubelet[2548]: I0307 01:22:24.593668 2548 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:22:24.646953 kubelet[2548]: E0307 01:22:24.646549 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.647361 kubelet[2548]: E0307 01:22:24.647347 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.647617 kubelet[2548]: E0307 01:22:24.647569 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:24.682268 kubelet[2548]: I0307 01:22:24.682212 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-28-122" podStartSLOduration=1.6821995090000001 podStartE2EDuration="1.682199509s" podCreationTimestamp="2026-03-07 01:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:24.672449594 +0000 UTC m=+1.192421017" watchObservedRunningTime="2026-03-07 01:22:24.682199509 +0000 UTC m=+1.202170932" Mar 7 01:22:24.702507 kubelet[2548]: I0307 01:22:24.702459 2548 
pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-28-122" podStartSLOduration=1.7024478090000001 podStartE2EDuration="1.702447809s" podCreationTimestamp="2026-03-07 01:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:24.683247789 +0000 UTC m=+1.203219232" watchObservedRunningTime="2026-03-07 01:22:24.702447809 +0000 UTC m=+1.222419232" Mar 7 01:22:24.717165 kubelet[2548]: I0307 01:22:24.717121 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-28-122" podStartSLOduration=1.717110056 podStartE2EDuration="1.717110056s" podCreationTimestamp="2026-03-07 01:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:24.703105219 +0000 UTC m=+1.223076642" watchObservedRunningTime="2026-03-07 01:22:24.717110056 +0000 UTC m=+1.237081479" Mar 7 01:22:25.647148 kubelet[2548]: E0307 01:22:25.647117 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:25.647649 kubelet[2548]: E0307 01:22:25.647581 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:25.647800 kubelet[2548]: E0307 01:22:25.647785 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:26.648899 kubelet[2548]: E0307 01:22:26.648859 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:26.649721 kubelet[2548]: E0307 01:22:26.649436 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:27.650609 kubelet[2548]: E0307 01:22:27.650583 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:29.569817 kubelet[2548]: I0307 01:22:29.569771 2548 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:22:29.571152 containerd[1465]: time="2026-03-07T01:22:29.571088612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:22:29.571710 kubelet[2548]: I0307 01:22:29.571303 2548 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:22:29.979594 kubelet[2548]: E0307 01:22:29.979464 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:30.691238 systemd[1]: Created slice kubepods-besteffort-podebfcc8aa_0f73_449a_b1bf_e699f674ab48.slice - libcontainer container kubepods-besteffort-podebfcc8aa_0f73_449a_b1bf_e699f674ab48.slice. 
Mar 7 01:22:30.736876 kubelet[2548]: I0307 01:22:30.736803 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebfcc8aa-0f73-449a-b1bf-e699f674ab48-kube-proxy\") pod \"kube-proxy-wkkgb\" (UID: \"ebfcc8aa-0f73-449a-b1bf-e699f674ab48\") " pod="kube-system/kube-proxy-wkkgb" Mar 7 01:22:30.736876 kubelet[2548]: I0307 01:22:30.736847 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebfcc8aa-0f73-449a-b1bf-e699f674ab48-xtables-lock\") pod \"kube-proxy-wkkgb\" (UID: \"ebfcc8aa-0f73-449a-b1bf-e699f674ab48\") " pod="kube-system/kube-proxy-wkkgb" Mar 7 01:22:30.737492 kubelet[2548]: I0307 01:22:30.736896 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebfcc8aa-0f73-449a-b1bf-e699f674ab48-lib-modules\") pod \"kube-proxy-wkkgb\" (UID: \"ebfcc8aa-0f73-449a-b1bf-e699f674ab48\") " pod="kube-system/kube-proxy-wkkgb" Mar 7 01:22:30.737492 kubelet[2548]: I0307 01:22:30.736975 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnnh\" (UniqueName: \"kubernetes.io/projected/ebfcc8aa-0f73-449a-b1bf-e699f674ab48-kube-api-access-7jnnh\") pod \"kube-proxy-wkkgb\" (UID: \"ebfcc8aa-0f73-449a-b1bf-e699f674ab48\") " pod="kube-system/kube-proxy-wkkgb" Mar 7 01:22:30.813616 systemd[1]: Created slice kubepods-besteffort-pod3dbeba58_836f_4163_a97c_af0d55bfea96.slice - libcontainer container kubepods-besteffort-pod3dbeba58_836f_4163_a97c_af0d55bfea96.slice. 
Mar 7 01:22:30.839282 kubelet[2548]: I0307 01:22:30.837951 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j69z\" (UniqueName: \"kubernetes.io/projected/3dbeba58-836f-4163-a97c-af0d55bfea96-kube-api-access-6j69z\") pod \"tigera-operator-6cf4cccc57-vzzsr\" (UID: \"3dbeba58-836f-4163-a97c-af0d55bfea96\") " pod="tigera-operator/tigera-operator-6cf4cccc57-vzzsr" Mar 7 01:22:30.839282 kubelet[2548]: I0307 01:22:30.838029 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3dbeba58-836f-4163-a97c-af0d55bfea96-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-vzzsr\" (UID: \"3dbeba58-836f-4163-a97c-af0d55bfea96\") " pod="tigera-operator/tigera-operator-6cf4cccc57-vzzsr" Mar 7 01:22:31.002234 kubelet[2548]: E0307 01:22:31.002107 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:31.003730 containerd[1465]: time="2026-03-07T01:22:31.003239867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkkgb,Uid:ebfcc8aa-0f73-449a-b1bf-e699f674ab48,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:31.028554 containerd[1465]: time="2026-03-07T01:22:31.028341540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:31.028554 containerd[1465]: time="2026-03-07T01:22:31.028388520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:31.028554 containerd[1465]: time="2026-03-07T01:22:31.028406310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:31.028554 containerd[1465]: time="2026-03-07T01:22:31.028495190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:31.053055 systemd[1]: Started cri-containerd-906caf9af463f7fbcda8ca70e3a9d80ebb1061a8cba6972dbc2af737c23be87d.scope - libcontainer container 906caf9af463f7fbcda8ca70e3a9d80ebb1061a8cba6972dbc2af737c23be87d. Mar 7 01:22:31.081187 containerd[1465]: time="2026-03-07T01:22:31.081035286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wkkgb,Uid:ebfcc8aa-0f73-449a-b1bf-e699f674ab48,Namespace:kube-system,Attempt:0,} returns sandbox id \"906caf9af463f7fbcda8ca70e3a9d80ebb1061a8cba6972dbc2af737c23be87d\"" Mar 7 01:22:31.084934 kubelet[2548]: E0307 01:22:31.084223 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:31.089713 containerd[1465]: time="2026-03-07T01:22:31.089686960Z" level=info msg="CreateContainer within sandbox \"906caf9af463f7fbcda8ca70e3a9d80ebb1061a8cba6972dbc2af737c23be87d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:22:31.116755 containerd[1465]: time="2026-03-07T01:22:31.116730354Z" level=info msg="CreateContainer within sandbox \"906caf9af463f7fbcda8ca70e3a9d80ebb1061a8cba6972dbc2af737c23be87d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4416c8fdcd20c725ef0f6fb49562eb9bb5859fa44f28fc23f5773e7ab39fde6\"" Mar 7 01:22:31.117499 containerd[1465]: time="2026-03-07T01:22:31.117467784Z" level=info msg="StartContainer for \"c4416c8fdcd20c725ef0f6fb49562eb9bb5859fa44f28fc23f5773e7ab39fde6\"" Mar 7 01:22:31.121237 containerd[1465]: time="2026-03-07T01:22:31.121210196Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-vzzsr,Uid:3dbeba58-836f-4163-a97c-af0d55bfea96,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:22:31.145628 containerd[1465]: time="2026-03-07T01:22:31.145565418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:31.145863 containerd[1465]: time="2026-03-07T01:22:31.145790518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:31.145863 containerd[1465]: time="2026-03-07T01:22:31.145810458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:31.146105 containerd[1465]: time="2026-03-07T01:22:31.146065018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:31.152428 systemd[1]: Started cri-containerd-c4416c8fdcd20c725ef0f6fb49562eb9bb5859fa44f28fc23f5773e7ab39fde6.scope - libcontainer container c4416c8fdcd20c725ef0f6fb49562eb9bb5859fa44f28fc23f5773e7ab39fde6. Mar 7 01:22:31.173229 systemd[1]: Started cri-containerd-aef47c349df611396a223aaa252bdcdd62a031db2fd66ee9b0b81b09ebef12b5.scope - libcontainer container aef47c349df611396a223aaa252bdcdd62a031db2fd66ee9b0b81b09ebef12b5. 
Mar 7 01:22:31.199138 containerd[1465]: time="2026-03-07T01:22:31.198761425Z" level=info msg="StartContainer for \"c4416c8fdcd20c725ef0f6fb49562eb9bb5859fa44f28fc23f5773e7ab39fde6\" returns successfully" Mar 7 01:22:31.229757 containerd[1465]: time="2026-03-07T01:22:31.229686640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-vzzsr,Uid:3dbeba58-836f-4163-a97c-af0d55bfea96,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aef47c349df611396a223aaa252bdcdd62a031db2fd66ee9b0b81b09ebef12b5\"" Mar 7 01:22:31.233286 containerd[1465]: time="2026-03-07T01:22:31.233258592Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:22:31.667173 kubelet[2548]: E0307 01:22:31.666892 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:32.035391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270761088.mount: Deactivated successfully. Mar 7 01:22:32.900311 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 7 01:22:33.360683 containerd[1465]: time="2026-03-07T01:22:33.360552575Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:33.361705 containerd[1465]: time="2026-03-07T01:22:33.361547275Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:22:33.362782 containerd[1465]: time="2026-03-07T01:22:33.362531836Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:33.364488 containerd[1465]: time="2026-03-07T01:22:33.364447227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:33.365187 containerd[1465]: time="2026-03-07T01:22:33.365151407Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.131855395s" Mar 7 01:22:33.365236 containerd[1465]: time="2026-03-07T01:22:33.365190397Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:22:33.372717 containerd[1465]: time="2026-03-07T01:22:33.372691461Z" level=info msg="CreateContainer within sandbox \"aef47c349df611396a223aaa252bdcdd62a031db2fd66ee9b0b81b09ebef12b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:22:33.387769 containerd[1465]: time="2026-03-07T01:22:33.387671869Z" level=info msg="CreateContainer within sandbox 
\"aef47c349df611396a223aaa252bdcdd62a031db2fd66ee9b0b81b09ebef12b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cc4a8264da717144cb2aafff8b63af3793ce7ead9d74d3193f13851be096b1b8\"" Mar 7 01:22:33.388959 containerd[1465]: time="2026-03-07T01:22:33.388482989Z" level=info msg="StartContainer for \"cc4a8264da717144cb2aafff8b63af3793ce7ead9d74d3193f13851be096b1b8\"" Mar 7 01:22:33.429071 systemd[1]: Started cri-containerd-cc4a8264da717144cb2aafff8b63af3793ce7ead9d74d3193f13851be096b1b8.scope - libcontainer container cc4a8264da717144cb2aafff8b63af3793ce7ead9d74d3193f13851be096b1b8. Mar 7 01:22:33.458631 containerd[1465]: time="2026-03-07T01:22:33.457855654Z" level=info msg="StartContainer for \"cc4a8264da717144cb2aafff8b63af3793ce7ead9d74d3193f13851be096b1b8\" returns successfully" Mar 7 01:22:33.688042 kubelet[2548]: I0307 01:22:33.687559 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-wkkgb" podStartSLOduration=3.687539758 podStartE2EDuration="3.687539758s" podCreationTimestamp="2026-03-07 01:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:31.679300085 +0000 UTC m=+8.199271538" watchObservedRunningTime="2026-03-07 01:22:33.687539758 +0000 UTC m=+10.207511191" Mar 7 01:22:35.505029 kubelet[2548]: E0307 01:22:35.504980 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:35.529468 kubelet[2548]: I0307 01:22:35.529401 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-vzzsr" podStartSLOduration=3.395282673 podStartE2EDuration="5.529387429s" podCreationTimestamp="2026-03-07 01:22:30 +0000 UTC" firstStartedPulling="2026-03-07 01:22:31.232213522 +0000 UTC 
m=+7.752184955" lastFinishedPulling="2026-03-07 01:22:33.366318278 +0000 UTC m=+9.886289711" observedRunningTime="2026-03-07 01:22:33.688835689 +0000 UTC m=+10.208807112" watchObservedRunningTime="2026-03-07 01:22:35.529387429 +0000 UTC m=+12.049358852" Mar 7 01:22:36.235954 kubelet[2548]: E0307 01:22:36.233358 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:39.163129 sudo[1690]: pam_unix(sudo:session): session closed for user root Mar 7 01:22:39.187875 sshd[1687]: pam_unix(sshd:session): session closed for user core Mar 7 01:22:39.192158 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:22:39.195179 systemd[1]: sshd@6-172.232.28.122:22-68.220.241.50:59702.service: Deactivated successfully. Mar 7 01:22:39.202793 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:22:39.203023 systemd[1]: session-7.scope: Consumed 3.029s CPU time, 160.3M memory peak, 0B memory swap peak. Mar 7 01:22:39.206412 systemd-logind[1450]: Removed session 7. Mar 7 01:22:39.987845 kubelet[2548]: E0307 01:22:39.987782 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:41.539978 systemd[1]: Created slice kubepods-besteffort-pod29eddccc_9761_4c27_a448_8fd2ba4532cf.slice - libcontainer container kubepods-besteffort-pod29eddccc_9761_4c27_a448_8fd2ba4532cf.slice. Mar 7 01:22:41.603442 systemd[1]: Created slice kubepods-besteffort-pod74940bda_b9b7_49a0_8e37_4413fb31c2f2.slice - libcontainer container kubepods-besteffort-pod74940bda_b9b7_49a0_8e37_4413fb31c2f2.slice. 
Mar 7 01:22:41.606521 kubelet[2548]: I0307 01:22:41.606118 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/74940bda-b9b7-49a0-8e37-4413fb31c2f2-node-certs\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606521 kubelet[2548]: I0307 01:22:41.606152 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-sys-fs\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606521 kubelet[2548]: I0307 01:22:41.606170 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74940bda-b9b7-49a0-8e37-4413fb31c2f2-tigera-ca-bundle\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606521 kubelet[2548]: I0307 01:22:41.606183 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqg8\" (UniqueName: \"kubernetes.io/projected/74940bda-b9b7-49a0-8e37-4413fb31c2f2-kube-api-access-6zqg8\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606521 kubelet[2548]: I0307 01:22:41.606198 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-cni-bin-dir\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606930 kubelet[2548]: I0307 01:22:41.606210 2548 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-policysync\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606930 kubelet[2548]: I0307 01:22:41.606222 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-var-run-calico\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606930 kubelet[2548]: I0307 01:22:41.606235 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zgcs\" (UniqueName: \"kubernetes.io/projected/29eddccc-9761-4c27-a448-8fd2ba4532cf-kube-api-access-6zgcs\") pod \"calico-typha-5984dbf96b-vqjn7\" (UID: \"29eddccc-9761-4c27-a448-8fd2ba4532cf\") " pod="calico-system/calico-typha-5984dbf96b-vqjn7" Mar 7 01:22:41.606930 kubelet[2548]: I0307 01:22:41.606248 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-cni-net-dir\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.606930 kubelet[2548]: I0307 01:22:41.606261 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-var-lib-calico\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607050 kubelet[2548]: I0307 01:22:41.606272 2548 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-cni-log-dir\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607050 kubelet[2548]: I0307 01:22:41.606284 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-flexvol-driver-host\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607050 kubelet[2548]: I0307 01:22:41.606297 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-lib-modules\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607050 kubelet[2548]: I0307 01:22:41.606311 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/29eddccc-9761-4c27-a448-8fd2ba4532cf-typha-certs\") pod \"calico-typha-5984dbf96b-vqjn7\" (UID: \"29eddccc-9761-4c27-a448-8fd2ba4532cf\") " pod="calico-system/calico-typha-5984dbf96b-vqjn7" Mar 7 01:22:41.607050 kubelet[2548]: I0307 01:22:41.606324 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-bpffs\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607152 kubelet[2548]: I0307 01:22:41.606337 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29eddccc-9761-4c27-a448-8fd2ba4532cf-tigera-ca-bundle\") pod \"calico-typha-5984dbf96b-vqjn7\" (UID: \"29eddccc-9761-4c27-a448-8fd2ba4532cf\") " pod="calico-system/calico-typha-5984dbf96b-vqjn7" Mar 7 01:22:41.607152 kubelet[2548]: I0307 01:22:41.606349 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-nodeproc\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.607152 kubelet[2548]: I0307 01:22:41.606362 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74940bda-b9b7-49a0-8e37-4413fb31c2f2-xtables-lock\") pod \"calico-node-x6qzq\" (UID: \"74940bda-b9b7-49a0-8e37-4413fb31c2f2\") " pod="calico-system/calico-node-x6qzq" Mar 7 01:22:41.704956 kubelet[2548]: E0307 01:22:41.703756 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:41.707591 kubelet[2548]: I0307 01:22:41.707563 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3751f11e-67af-41ad-8416-aabd3cc9da2f-kubelet-dir\") pod \"csi-node-driver-69l94\" (UID: \"3751f11e-67af-41ad-8416-aabd3cc9da2f\") " pod="calico-system/csi-node-driver-69l94" Mar 7 01:22:41.707591 kubelet[2548]: I0307 01:22:41.707590 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/3751f11e-67af-41ad-8416-aabd3cc9da2f-registration-dir\") pod \"csi-node-driver-69l94\" (UID: \"3751f11e-67af-41ad-8416-aabd3cc9da2f\") " pod="calico-system/csi-node-driver-69l94" Mar 7 01:22:41.707680 kubelet[2548]: I0307 01:22:41.707606 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3751f11e-67af-41ad-8416-aabd3cc9da2f-socket-dir\") pod \"csi-node-driver-69l94\" (UID: \"3751f11e-67af-41ad-8416-aabd3cc9da2f\") " pod="calico-system/csi-node-driver-69l94" Mar 7 01:22:41.707680 kubelet[2548]: I0307 01:22:41.707622 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3751f11e-67af-41ad-8416-aabd3cc9da2f-varrun\") pod \"csi-node-driver-69l94\" (UID: \"3751f11e-67af-41ad-8416-aabd3cc9da2f\") " pod="calico-system/csi-node-driver-69l94" Mar 7 01:22:41.707680 kubelet[2548]: I0307 01:22:41.707661 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w97j\" (UniqueName: \"kubernetes.io/projected/3751f11e-67af-41ad-8416-aabd3cc9da2f-kube-api-access-6w97j\") pod \"csi-node-driver-69l94\" (UID: \"3751f11e-67af-41ad-8416-aabd3cc9da2f\") " pod="calico-system/csi-node-driver-69l94" Mar 7 01:22:41.718578 kubelet[2548]: E0307 01:22:41.717721 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:22:41.719036 kubelet[2548]: W0307 01:22:41.719019 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:22:41.719132 kubelet[2548]: E0307 01:22:41.719120 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:22:41.844269 kubelet[2548]: E0307 01:22:41.844220 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:41.845305 containerd[1465]: time="2026-03-07T01:22:41.845077514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5984dbf96b-vqjn7,Uid:29eddccc-9761-4c27-a448-8fd2ba4532cf,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:41.872330 containerd[1465]: time="2026-03-07T01:22:41.872240248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:41.872416 containerd[1465]: time="2026-03-07T01:22:41.872318258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:41.872416 containerd[1465]: time="2026-03-07T01:22:41.872333738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:41.872642 containerd[1465]: time="2026-03-07T01:22:41.872604928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:41.891090 systemd[1]: Started cri-containerd-1972cf4e901ea7f16a6333be1ef063ccf10b812ae63a5ef20468a57ac11f2b4d.scope - libcontainer container 1972cf4e901ea7f16a6333be1ef063ccf10b812ae63a5ef20468a57ac11f2b4d. 
Mar 7 01:22:41.914528 containerd[1465]: time="2026-03-07T01:22:41.914480489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6qzq,Uid:74940bda-b9b7-49a0-8e37-4413fb31c2f2,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:41.946180 containerd[1465]: time="2026-03-07T01:22:41.946079545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5984dbf96b-vqjn7,Uid:29eddccc-9761-4c27-a448-8fd2ba4532cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"1972cf4e901ea7f16a6333be1ef063ccf10b812ae63a5ef20468a57ac11f2b4d\"" Mar 7 01:22:41.948001 kubelet[2548]: E0307 01:22:41.946851 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:41.948447 containerd[1465]: time="2026-03-07T01:22:41.948414356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:22:41.957210 containerd[1465]: time="2026-03-07T01:22:41.956973620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:41.957210 containerd[1465]: time="2026-03-07T01:22:41.957020220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:41.957210 containerd[1465]: time="2026-03-07T01:22:41.957030550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:41.957210 containerd[1465]: time="2026-03-07T01:22:41.957114440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:41.977054 systemd[1]: Started cri-containerd-52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0.scope - libcontainer container 52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0. Mar 7 01:22:42.013396 containerd[1465]: time="2026-03-07T01:22:42.013355528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x6qzq,Uid:74940bda-b9b7-49a0-8e37-4413fb31c2f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\"" Mar 7 01:22:43.447538 containerd[1465]: time="2026-03-07T01:22:43.447482835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:43.448478 containerd[1465]: time="2026-03-07T01:22:43.448294605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:22:43.449191 containerd[1465]: time="2026-03-07T01:22:43.448953946Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:43.450746 containerd[1465]: time="2026-03-07T01:22:43.450723637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:43.451653 containerd[1465]: time="2026-03-07T01:22:43.451630267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.503138801s" Mar 7 
01:22:43.451738 containerd[1465]: time="2026-03-07T01:22:43.451722307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:22:43.455843 containerd[1465]: time="2026-03-07T01:22:43.455791689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:22:43.472680 containerd[1465]: time="2026-03-07T01:22:43.472630278Z" level=info msg="CreateContainer within sandbox \"1972cf4e901ea7f16a6333be1ef063ccf10b812ae63a5ef20468a57ac11f2b4d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:22:43.484124 containerd[1465]: time="2026-03-07T01:22:43.484082063Z" level=info msg="CreateContainer within sandbox \"1972cf4e901ea7f16a6333be1ef063ccf10b812ae63a5ef20468a57ac11f2b4d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ecd7e284ca1df037f5d8d8cedb50d05ea4f63bc10afe767ae0591b185191302\"" Mar 7 01:22:43.485139 containerd[1465]: time="2026-03-07T01:22:43.485018084Z" level=info msg="StartContainer for \"1ecd7e284ca1df037f5d8d8cedb50d05ea4f63bc10afe767ae0591b185191302\"" Mar 7 01:22:43.518062 systemd[1]: Started cri-containerd-1ecd7e284ca1df037f5d8d8cedb50d05ea4f63bc10afe767ae0591b185191302.scope - libcontainer container 1ecd7e284ca1df037f5d8d8cedb50d05ea4f63bc10afe767ae0591b185191302. 
Mar 7 01:22:43.561388 containerd[1465]: time="2026-03-07T01:22:43.561302362Z" level=info msg="StartContainer for \"1ecd7e284ca1df037f5d8d8cedb50d05ea4f63bc10afe767ae0591b185191302\" returns successfully" Mar 7 01:22:43.602208 kubelet[2548]: E0307 01:22:43.602150 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:43.707373 kubelet[2548]: E0307 01:22:43.707256 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:43.720792 kubelet[2548]: E0307 01:22:43.719398 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:22:43.720792 kubelet[2548]: W0307 01:22:43.719418 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:22:43.720792 kubelet[2548]: E0307 01:22:43.719438 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:22:44.127960 containerd[1465]: time="2026-03-07T01:22:44.127880085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:44.128694 containerd[1465]: time="2026-03-07T01:22:44.128654835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:22:44.129944 containerd[1465]: time="2026-03-07T01:22:44.129153986Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:44.131371 containerd[1465]: time="2026-03-07T01:22:44.131331697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:44.132750 containerd[1465]: time="2026-03-07T01:22:44.132270927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 676.453308ms" Mar 7 01:22:44.132750 containerd[1465]: time="2026-03-07T01:22:44.132320897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:22:44.137940 containerd[1465]: time="2026-03-07T01:22:44.137940970Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:22:44.152890 containerd[1465]: time="2026-03-07T01:22:44.152843847Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873\"" Mar 7 01:22:44.154421 containerd[1465]: time="2026-03-07T01:22:44.154250718Z" level=info msg="StartContainer for \"59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873\"" Mar 7 01:22:44.201053 systemd[1]: Started cri-containerd-59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873.scope - libcontainer container 59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873. Mar 7 01:22:44.232117 containerd[1465]: time="2026-03-07T01:22:44.232074677Z" level=info msg="StartContainer for \"59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873\" returns successfully" Mar 7 01:22:44.249283 systemd[1]: cri-containerd-59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873.scope: Deactivated successfully. Mar 7 01:22:44.281938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873-rootfs.mount: Deactivated successfully. 
Mar 7 01:22:44.371657 containerd[1465]: time="2026-03-07T01:22:44.371600947Z" level=info msg="shim disconnected" id=59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873 namespace=k8s.io Mar 7 01:22:44.371657 containerd[1465]: time="2026-03-07T01:22:44.371643897Z" level=warning msg="cleaning up after shim disconnected" id=59c0581bac1c00e6db0fbd7d0723927d0bd6912e56fbe3c43194ecb7103f5873 namespace=k8s.io Mar 7 01:22:44.371657 containerd[1465]: time="2026-03-07T01:22:44.371653517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:22:44.711714 kubelet[2548]: I0307 01:22:44.711677 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:22:44.712355 kubelet[2548]: E0307 01:22:44.711980 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:44.714124 containerd[1465]: time="2026-03-07T01:22:44.714005698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:22:44.727808 kubelet[2548]: I0307 01:22:44.727605 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5984dbf96b-vqjn7" podStartSLOduration=2.222572513 podStartE2EDuration="3.727593875s" podCreationTimestamp="2026-03-07 01:22:41 +0000 UTC" firstStartedPulling="2026-03-07 01:22:41.947855716 +0000 UTC m=+18.467827149" lastFinishedPulling="2026-03-07 01:22:43.452877088 +0000 UTC m=+19.972848511" observedRunningTime="2026-03-07 01:22:43.754148998 +0000 UTC m=+20.274120421" watchObservedRunningTime="2026-03-07 01:22:44.727593875 +0000 UTC m=+21.247565298" Mar 7 01:22:45.603354 kubelet[2548]: E0307 01:22:45.603272 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:46.657904 update_engine[1453]: I20260307 01:22:46.656410 1453 update_attempter.cc:509] Updating boot flags... Mar 7 01:22:46.760021 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3257) Mar 7 01:22:46.862007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3261) Mar 7 01:22:46.980015 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3261) Mar 7 01:22:47.601422 kubelet[2548]: E0307 01:22:47.601385 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:48.724820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227261173.mount: Deactivated successfully. 
Mar 7 01:22:48.753093 containerd[1465]: time="2026-03-07T01:22:48.753049056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:48.753872 containerd[1465]: time="2026-03-07T01:22:48.753824746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:22:48.755366 containerd[1465]: time="2026-03-07T01:22:48.754318167Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:48.756957 containerd[1465]: time="2026-03-07T01:22:48.755895307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:48.756957 containerd[1465]: time="2026-03-07T01:22:48.756618618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.04256904s" Mar 7 01:22:48.756957 containerd[1465]: time="2026-03-07T01:22:48.756641448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:22:48.765643 containerd[1465]: time="2026-03-07T01:22:48.765588592Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:22:48.778815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950833633.mount: Deactivated 
successfully. Mar 7 01:22:48.782522 containerd[1465]: time="2026-03-07T01:22:48.782498551Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e\"" Mar 7 01:22:48.783035 containerd[1465]: time="2026-03-07T01:22:48.782991191Z" level=info msg="StartContainer for \"5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e\"" Mar 7 01:22:48.814069 systemd[1]: Started cri-containerd-5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e.scope - libcontainer container 5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e. Mar 7 01:22:48.830033 systemd-timesyncd[1400]: Timed out waiting for reply from [2600:1702:80c0:9a80:1ee4:b0a2:44bc:c606]:123 (2.flatcar.pool.ntp.org). Mar 7 01:22:48.839434 containerd[1465]: time="2026-03-07T01:22:48.839373339Z" level=info msg="StartContainer for \"5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e\" returns successfully" Mar 7 01:22:49.649761 systemd-resolved[1372]: Clock change detected. Flushing caches. Mar 7 01:22:49.650582 systemd-timesyncd[1400]: Contacted time server [2602:f9f3:1:2f::6:123]:123 (2.flatcar.pool.ntp.org). Mar 7 01:22:49.650661 systemd-timesyncd[1400]: Initial clock synchronization to Sat 2026-03-07 01:22:49.649713 UTC. Mar 7 01:22:49.672087 systemd[1]: cri-containerd-5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e.scope: Deactivated successfully. 
Mar 7 01:22:49.803725 containerd[1465]: time="2026-03-07T01:22:49.802907909Z" level=info msg="shim disconnected" id=5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e namespace=k8s.io Mar 7 01:22:49.803725 containerd[1465]: time="2026-03-07T01:22:49.802966389Z" level=warning msg="cleaning up after shim disconnected" id=5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e namespace=k8s.io Mar 7 01:22:49.803725 containerd[1465]: time="2026-03-07T01:22:49.802976819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:22:50.391598 kubelet[2548]: E0307 01:22:50.391559 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:50.519279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dc242c0876dc69272c08f260bea425f6de3183e459b89a638fc7df3713bf29e-rootfs.mount: Deactivated successfully. 
Mar 7 01:22:50.524160 containerd[1465]: time="2026-03-07T01:22:50.523013389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:22:52.303055 containerd[1465]: time="2026-03-07T01:22:52.302201348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:52.303055 containerd[1465]: time="2026-03-07T01:22:52.303014649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:22:52.303917 containerd[1465]: time="2026-03-07T01:22:52.303874249Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:52.309315 containerd[1465]: time="2026-03-07T01:22:52.309288912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:52.310309 containerd[1465]: time="2026-03-07T01:22:52.310287262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.787030363s" Mar 7 01:22:52.310385 containerd[1465]: time="2026-03-07T01:22:52.310369772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:22:52.316773 containerd[1465]: time="2026-03-07T01:22:52.316724505Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:22:52.339437 containerd[1465]: time="2026-03-07T01:22:52.339394377Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790\"" Mar 7 01:22:52.340338 containerd[1465]: time="2026-03-07T01:22:52.340066077Z" level=info msg="StartContainer for \"8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790\"" Mar 7 01:22:52.384790 systemd[1]: Started cri-containerd-8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790.scope - libcontainer container 8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790. Mar 7 01:22:52.393252 kubelet[2548]: E0307 01:22:52.392967 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69l94" podUID="3751f11e-67af-41ad-8416-aabd3cc9da2f" Mar 7 01:22:52.420416 containerd[1465]: time="2026-03-07T01:22:52.420377657Z" level=info msg="StartContainer for \"8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790\" returns successfully" Mar 7 01:22:52.986833 containerd[1465]: time="2026-03-07T01:22:52.986791410Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:22:52.990434 systemd[1]: cri-containerd-8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790.scope: Deactivated successfully. 
Mar 7 01:22:53.017564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790-rootfs.mount: Deactivated successfully. Mar 7 01:22:53.068489 kubelet[2548]: I0307 01:22:53.067999 2548 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 01:22:53.082706 containerd[1465]: time="2026-03-07T01:22:53.082641748Z" level=info msg="shim disconnected" id=8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790 namespace=k8s.io Mar 7 01:22:53.082706 containerd[1465]: time="2026-03-07T01:22:53.082701058Z" level=warning msg="cleaning up after shim disconnected" id=8b08b8d58839bcc3e8743c9e9d8ae5243f359df756ff4d410c08d915e6cd3790 namespace=k8s.io Mar 7 01:22:53.082831 containerd[1465]: time="2026-03-07T01:22:53.082710838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:22:53.132484 systemd[1]: Created slice kubepods-burstable-podcb5fbdd0_b27a_4ac5_bfce_f97041d5d5e0.slice - libcontainer container kubepods-burstable-podcb5fbdd0_b27a_4ac5_bfce_f97041d5d5e0.slice. Mar 7 01:22:53.141521 systemd[1]: Created slice kubepods-burstable-pod52fafc1f_6106_4d15_bfa6_45d2f9cd684f.slice - libcontainer container kubepods-burstable-pod52fafc1f_6106_4d15_bfa6_45d2f9cd684f.slice. Mar 7 01:22:53.149892 systemd[1]: Created slice kubepods-besteffort-pod15881307_f8ce_4e05_8cbd_e62d67c74c8e.slice - libcontainer container kubepods-besteffort-pod15881307_f8ce_4e05_8cbd_e62d67c74c8e.slice. Mar 7 01:22:53.161222 systemd[1]: Created slice kubepods-besteffort-poda4e013e7_4a81_4a32_a9dd_da65e551cd48.slice - libcontainer container kubepods-besteffort-poda4e013e7_4a81_4a32_a9dd_da65e551cd48.slice. Mar 7 01:22:53.166663 systemd[1]: Created slice kubepods-besteffort-pod4ebdd347_5ce7_4d0e_95dd_1d2bf0c987de.slice - libcontainer container kubepods-besteffort-pod4ebdd347_5ce7_4d0e_95dd_1d2bf0c987de.slice. 
Mar 7 01:22:53.175787 systemd[1]: Created slice kubepods-besteffort-pod7aac9be5_f2a8_4947_a1f7_e67a1f82abd2.slice - libcontainer container kubepods-besteffort-pod7aac9be5_f2a8_4947_a1f7_e67a1f82abd2.slice. Mar 7 01:22:53.182099 systemd[1]: Created slice kubepods-besteffort-podcaf9c03d_2067_4e54_a6ed_0d88a6e481b4.slice - libcontainer container kubepods-besteffort-podcaf9c03d_2067_4e54_a6ed_0d88a6e481b4.slice. Mar 7 01:22:53.191866 kubelet[2548]: I0307 01:22:53.191834 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aac9be5-f2a8-4947-a1f7-e67a1f82abd2-config\") pod \"goldmane-9f7667bb8-7qq69\" (UID: \"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2\") " pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.191947 kubelet[2548]: I0307 01:22:53.191872 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vlzd\" (UniqueName: \"kubernetes.io/projected/4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de-kube-api-access-4vlzd\") pod \"calico-kube-controllers-599474f6f5-25hl4\" (UID: \"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de\") " pod="calico-system/calico-kube-controllers-599474f6f5-25hl4" Mar 7 01:22:53.191947 kubelet[2548]: I0307 01:22:53.191890 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fafc1f-6106-4d15-bfa6-45d2f9cd684f-config-volume\") pod \"coredns-7d764666f9-db7dz\" (UID: \"52fafc1f-6106-4d15-bfa6-45d2f9cd684f\") " pod="kube-system/coredns-7d764666f9-db7dz" Mar 7 01:22:53.191947 kubelet[2548]: I0307 01:22:53.191905 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aac9be5-f2a8-4947-a1f7-e67a1f82abd2-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-7qq69\" (UID: \"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2\") 
" pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.191947 kubelet[2548]: I0307 01:22:53.191919 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7aac9be5-f2a8-4947-a1f7-e67a1f82abd2-goldmane-key-pair\") pod \"goldmane-9f7667bb8-7qq69\" (UID: \"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2\") " pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.191947 kubelet[2548]: I0307 01:22:53.191938 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4dp\" (UniqueName: \"kubernetes.io/projected/a4e013e7-4a81-4a32-a9dd-da65e551cd48-kube-api-access-qp4dp\") pod \"calico-apiserver-57bff9d745-p4brr\" (UID: \"a4e013e7-4a81-4a32-a9dd-da65e551cd48\") " pod="calico-system/calico-apiserver-57bff9d745-p4brr" Mar 7 01:22:53.192068 kubelet[2548]: I0307 01:22:53.191952 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-backend-key-pair\") pod \"whisker-57685c8f89-t66t7\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " pod="calico-system/whisker-57685c8f89-t66t7" Mar 7 01:22:53.192068 kubelet[2548]: I0307 01:22:53.191969 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84skj\" (UniqueName: \"kubernetes.io/projected/52fafc1f-6106-4d15-bfa6-45d2f9cd684f-kube-api-access-84skj\") pod \"coredns-7d764666f9-db7dz\" (UID: \"52fafc1f-6106-4d15-bfa6-45d2f9cd684f\") " pod="kube-system/coredns-7d764666f9-db7dz" Mar 7 01:22:53.192068 kubelet[2548]: I0307 01:22:53.191983 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15881307-f8ce-4e05-8cbd-e62d67c74c8e-calico-apiserver-certs\") pod 
\"calico-apiserver-57bff9d745-6q7rc\" (UID: \"15881307-f8ce-4e05-8cbd-e62d67c74c8e\") " pod="calico-system/calico-apiserver-57bff9d745-6q7rc" Mar 7 01:22:53.192068 kubelet[2548]: I0307 01:22:53.191998 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5vh\" (UniqueName: \"kubernetes.io/projected/7aac9be5-f2a8-4947-a1f7-e67a1f82abd2-kube-api-access-xj5vh\") pod \"goldmane-9f7667bb8-7qq69\" (UID: \"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2\") " pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.192068 kubelet[2548]: I0307 01:22:53.192012 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-nginx-config\") pod \"whisker-57685c8f89-t66t7\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " pod="calico-system/whisker-57685c8f89-t66t7" Mar 7 01:22:53.192175 kubelet[2548]: I0307 01:22:53.192029 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de-tigera-ca-bundle\") pod \"calico-kube-controllers-599474f6f5-25hl4\" (UID: \"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de\") " pod="calico-system/calico-kube-controllers-599474f6f5-25hl4" Mar 7 01:22:53.192175 kubelet[2548]: I0307 01:22:53.192042 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfl9\" (UniqueName: \"kubernetes.io/projected/15881307-f8ce-4e05-8cbd-e62d67c74c8e-kube-api-access-pgfl9\") pod \"calico-apiserver-57bff9d745-6q7rc\" (UID: \"15881307-f8ce-4e05-8cbd-e62d67c74c8e\") " pod="calico-system/calico-apiserver-57bff9d745-6q7rc" Mar 7 01:22:53.192175 kubelet[2548]: I0307 01:22:53.192058 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0-config-volume\") pod \"coredns-7d764666f9-zdzmq\" (UID: \"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0\") " pod="kube-system/coredns-7d764666f9-zdzmq" Mar 7 01:22:53.192175 kubelet[2548]: I0307 01:22:53.192071 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-ca-bundle\") pod \"whisker-57685c8f89-t66t7\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " pod="calico-system/whisker-57685c8f89-t66t7" Mar 7 01:22:53.192175 kubelet[2548]: I0307 01:22:53.192091 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-279mr\" (UniqueName: \"kubernetes.io/projected/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-kube-api-access-279mr\") pod \"whisker-57685c8f89-t66t7\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " pod="calico-system/whisker-57685c8f89-t66t7" Mar 7 01:22:53.192280 kubelet[2548]: I0307 01:22:53.192108 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4e013e7-4a81-4a32-a9dd-da65e551cd48-calico-apiserver-certs\") pod \"calico-apiserver-57bff9d745-p4brr\" (UID: \"a4e013e7-4a81-4a32-a9dd-da65e551cd48\") " pod="calico-system/calico-apiserver-57bff9d745-p4brr" Mar 7 01:22:53.192280 kubelet[2548]: I0307 01:22:53.192122 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwp86\" (UniqueName: \"kubernetes.io/projected/cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0-kube-api-access-qwp86\") pod \"coredns-7d764666f9-zdzmq\" (UID: \"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0\") " pod="kube-system/coredns-7d764666f9-zdzmq" Mar 7 01:22:53.439815 kubelet[2548]: E0307 01:22:53.439745 2548 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:53.441005 containerd[1465]: time="2026-03-07T01:22:53.440868997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zdzmq,Uid:cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:53.448685 kubelet[2548]: E0307 01:22:53.447675 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:53.448764 containerd[1465]: time="2026-03-07T01:22:53.448285871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-db7dz,Uid:52fafc1f-6106-4d15-bfa6-45d2f9cd684f,Namespace:kube-system,Attempt:0,}" Mar 7 01:22:53.461305 containerd[1465]: time="2026-03-07T01:22:53.461167937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-6q7rc,Uid:15881307-f8ce-4e05-8cbd-e62d67c74c8e,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:53.468544 containerd[1465]: time="2026-03-07T01:22:53.468208011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-p4brr,Uid:a4e013e7-4a81-4a32-a9dd-da65e551cd48,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:53.473654 containerd[1465]: time="2026-03-07T01:22:53.473608653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599474f6f5-25hl4,Uid:4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:53.481300 containerd[1465]: time="2026-03-07T01:22:53.481226307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-7qq69,Uid:7aac9be5-f2a8-4947-a1f7-e67a1f82abd2,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:53.489133 containerd[1465]: time="2026-03-07T01:22:53.489014381Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-57685c8f89-t66t7,Uid:caf9c03d-2067-4e54-a6ed-0d88a6e481b4,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:53.590487 containerd[1465]: time="2026-03-07T01:22:53.590294492Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:22:53.636868 containerd[1465]: time="2026-03-07T01:22:53.636804535Z" level=info msg="CreateContainer within sandbox \"52c667602156851c9cc91afba2c00f95f38d951c7b5d257cab18fda36773f8b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a\"" Mar 7 01:22:53.641428 containerd[1465]: time="2026-03-07T01:22:53.641396727Z" level=info msg="StartContainer for \"f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a\"" Mar 7 01:22:53.672894 containerd[1465]: time="2026-03-07T01:22:53.672849313Z" level=error msg="Failed to destroy network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.673575 containerd[1465]: time="2026-03-07T01:22:53.673551873Z" level=error msg="encountered an error cleaning up failed sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.673759 containerd[1465]: time="2026-03-07T01:22:53.673738593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-db7dz,Uid:52fafc1f-6106-4d15-bfa6-45d2f9cd684f,Namespace:kube-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.674548 kubelet[2548]: E0307 01:22:53.674278 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.674548 kubelet[2548]: E0307 01:22:53.674357 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-db7dz" Mar 7 01:22:53.674548 kubelet[2548]: E0307 01:22:53.674376 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-db7dz" Mar 7 01:22:53.674690 kubelet[2548]: E0307 01:22:53.674433 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-db7dz_kube-system(52fafc1f-6106-4d15-bfa6-45d2f9cd684f)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-7d764666f9-db7dz_kube-system(52fafc1f-6106-4d15-bfa6-45d2f9cd684f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-db7dz" podUID="52fafc1f-6106-4d15-bfa6-45d2f9cd684f" Mar 7 01:22:53.715381 containerd[1465]: time="2026-03-07T01:22:53.715241764Z" level=error msg="Failed to destroy network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.717559 containerd[1465]: time="2026-03-07T01:22:53.717416165Z" level=error msg="encountered an error cleaning up failed sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.717559 containerd[1465]: time="2026-03-07T01:22:53.717465905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-6q7rc,Uid:15881307-f8ce-4e05-8cbd-e62d67c74c8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.718216 kubelet[2548]: E0307 01:22:53.717805 2548 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.718216 kubelet[2548]: E0307 01:22:53.717850 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57bff9d745-6q7rc" Mar 7 01:22:53.718216 kubelet[2548]: E0307 01:22:53.717869 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57bff9d745-6q7rc" Mar 7 01:22:53.718368 kubelet[2548]: E0307 01:22:53.717922 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57bff9d745-6q7rc_calico-system(15881307-f8ce-4e05-8cbd-e62d67c74c8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57bff9d745-6q7rc_calico-system(15881307-f8ce-4e05-8cbd-e62d67c74c8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57bff9d745-6q7rc" podUID="15881307-f8ce-4e05-8cbd-e62d67c74c8e" Mar 7 01:22:53.745893 containerd[1465]: time="2026-03-07T01:22:53.745506809Z" level=error msg="Failed to destroy network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.746572 containerd[1465]: time="2026-03-07T01:22:53.746547970Z" level=error msg="encountered an error cleaning up failed sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.746729 containerd[1465]: time="2026-03-07T01:22:53.746677300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-7qq69,Uid:7aac9be5-f2a8-4947-a1f7-e67a1f82abd2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:22:53.747421 kubelet[2548]: E0307 01:22:53.747342 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 
01:22:53.747558 kubelet[2548]: E0307 01:22:53.747396 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.747558 kubelet[2548]: E0307 01:22:53.747514 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-7qq69" Mar 7 01:22:53.747825 kubelet[2548]: E0307 01:22:53.747747 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-7qq69_calico-system(7aac9be5-f2a8-4947-a1f7-e67a1f82abd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-7qq69_calico-system(7aac9be5-f2a8-4947-a1f7-e67a1f82abd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-7qq69" podUID="7aac9be5-f2a8-4947-a1f7-e67a1f82abd2" Mar 7 01:22:53.769715 systemd[1]: Started cri-containerd-f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a.scope - libcontainer container f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a. 
Mar 7 01:22:53.772585 containerd[1465]: time="2026-03-07T01:22:53.772400743Z" level=error msg="Failed to destroy network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.774273 containerd[1465]: time="2026-03-07T01:22:53.774199624Z" level=error msg="encountered an error cleaning up failed sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.774333 containerd[1465]: time="2026-03-07T01:22:53.774276634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-p4brr,Uid:a4e013e7-4a81-4a32-a9dd-da65e551cd48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.774849 kubelet[2548]: E0307 01:22:53.774796 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.774899 kubelet[2548]: E0307 01:22:53.774855 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57bff9d745-p4brr"
Mar 7 01:22:53.774899 kubelet[2548]: E0307 01:22:53.774876 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-57bff9d745-p4brr"
Mar 7 01:22:53.774969 kubelet[2548]: E0307 01:22:53.774927 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57bff9d745-p4brr_calico-system(a4e013e7-4a81-4a32-a9dd-da65e551cd48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57bff9d745-p4brr_calico-system(a4e013e7-4a81-4a32-a9dd-da65e551cd48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-57bff9d745-p4brr" podUID="a4e013e7-4a81-4a32-a9dd-da65e551cd48"
Mar 7 01:22:53.781255 containerd[1465]: time="2026-03-07T01:22:53.781184597Z" level=error msg="Failed to destroy network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.781780 containerd[1465]: time="2026-03-07T01:22:53.781750507Z" level=error msg="encountered an error cleaning up failed sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.782570 containerd[1465]: time="2026-03-07T01:22:53.781863887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zdzmq,Uid:cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.782610 kubelet[2548]: E0307 01:22:53.782244 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.782610 kubelet[2548]: E0307 01:22:53.782295 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zdzmq"
Mar 7 01:22:53.782610 kubelet[2548]: E0307 01:22:53.782316 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zdzmq"
Mar 7 01:22:53.782705 kubelet[2548]: E0307 01:22:53.782359 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-zdzmq_kube-system(cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-zdzmq_kube-system(cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-zdzmq" podUID="cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0"
Mar 7 01:22:53.802300 containerd[1465]: time="2026-03-07T01:22:53.801902277Z" level=error msg="Failed to destroy network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.802300 containerd[1465]: time="2026-03-07T01:22:53.802224168Z" level=error msg="encountered an error cleaning up failed sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.802610 containerd[1465]: time="2026-03-07T01:22:53.802585218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599474f6f5-25hl4,Uid:4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.803241 kubelet[2548]: E0307 01:22:53.803218 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.803365 kubelet[2548]: E0307 01:22:53.803349 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-599474f6f5-25hl4"
Mar 7 01:22:53.804414 kubelet[2548]: E0307 01:22:53.803512 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-599474f6f5-25hl4"
Mar 7 01:22:53.804414 kubelet[2548]: E0307 01:22:53.803564 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-599474f6f5-25hl4_calico-system(4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-599474f6f5-25hl4_calico-system(4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-599474f6f5-25hl4" podUID="4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de"
Mar 7 01:22:53.824805 containerd[1465]: time="2026-03-07T01:22:53.824763979Z" level=error msg="Failed to destroy network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.825348 containerd[1465]: time="2026-03-07T01:22:53.825322609Z" level=error msg="encountered an error cleaning up failed sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.825424 containerd[1465]: time="2026-03-07T01:22:53.825369859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57685c8f89-t66t7,Uid:caf9c03d-2067-4e54-a6ed-0d88a6e481b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.825812 kubelet[2548]: E0307 01:22:53.825558 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:22:53.825812 kubelet[2548]: E0307 01:22:53.825608 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57685c8f89-t66t7"
Mar 7 01:22:53.825812 kubelet[2548]: E0307 01:22:53.825707 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57685c8f89-t66t7"
Mar 7 01:22:53.826069 kubelet[2548]: E0307 01:22:53.825779 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57685c8f89-t66t7_calico-system(caf9c03d-2067-4e54-a6ed-0d88a6e481b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57685c8f89-t66t7_calico-system(caf9c03d-2067-4e54-a6ed-0d88a6e481b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57685c8f89-t66t7" podUID="caf9c03d-2067-4e54-a6ed-0d88a6e481b4"
Mar 7 01:22:53.826261 containerd[1465]: time="2026-03-07T01:22:53.826239940Z" level=info msg="StartContainer for \"f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a\" returns successfully"
Mar 7 01:22:54.212135 kubelet[2548]: E0307 01:22:54.211843 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:22:54.397270 systemd[1]: Created slice kubepods-besteffort-pod3751f11e_67af_41ad_8416_aabd3cc9da2f.slice - libcontainer container kubepods-besteffort-pod3751f11e_67af_41ad_8416_aabd3cc9da2f.slice.
Mar 7 01:22:54.402228 containerd[1465]: time="2026-03-07T01:22:54.402188027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69l94,Uid:3751f11e-67af-41ad-8416-aabd3cc9da2f,Namespace:calico-system,Attempt:0,}"
Mar 7 01:22:54.523281 systemd-networkd[1370]: cali2ed9a10caaa: Link UP
Mar 7 01:22:54.523522 systemd-networkd[1370]: cali2ed9a10caaa: Gained carrier
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.440 [ERROR][3662] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.457 [INFO][3662] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-csi--node--driver--69l94-eth0 csi-node-driver- calico-system 3751f11e-67af-41ad-8416-aabd3cc9da2f 710 0 2026-03-07 01:22:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-28-122 csi-node-driver-69l94 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ed9a10caaa [] [] }} ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.457 [INFO][3662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.483 [INFO][3674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" HandleID="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Workload="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.490 [INFO][3674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" HandleID="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Workload="172--232--28--122-k8s-csi--node--driver--69l94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"csi-node-driver-69l94", "timestamp":"2026-03-07 01:22:54.483579518 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000543340)}
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.490 [INFO][3674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.490 [INFO][3674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.491 [INFO][3674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122'
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.492 [INFO][3674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.496 [INFO][3674] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.499 [INFO][3674] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.501 [INFO][3674] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.502 [INFO][3674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.502 [INFO][3674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.504 [INFO][3674] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.507 [INFO][3674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.510 [INFO][3674] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 handle="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.510 [INFO][3674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" host="172-232-28-122"
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.510 [INFO][3674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:22:54.539305 containerd[1465]: 2026-03-07 01:22:54.510 [INFO][3674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" HandleID="k8s-pod-network.275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Workload="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.515 [INFO][3662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-csi--node--driver--69l94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3751f11e-67af-41ad-8416-aabd3cc9da2f", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"csi-node-driver-69l94", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ed9a10caaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.515 [INFO][3662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.1/32] ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.515 [INFO][3662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ed9a10caaa ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.524 [INFO][3662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.524 [INFO][3662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-csi--node--driver--69l94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3751f11e-67af-41ad-8416-aabd3cc9da2f", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4", Pod:"csi-node-driver-69l94", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ed9a10caaa", MAC:"e2:c8:12:be:2c:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:22:54.540539 containerd[1465]: 2026-03-07 01:22:54.535 [INFO][3662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4" Namespace="calico-system" Pod="csi-node-driver-69l94" WorkloadEndpoint="172--232--28--122-k8s-csi--node--driver--69l94-eth0"
Mar 7 01:22:54.544746 kubelet[2548]: I0307 01:22:54.544537 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a"
Mar 7 01:22:54.548007 containerd[1465]: time="2026-03-07T01:22:54.547980370Z" level=info msg="StopPodSandbox for \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\""
Mar 7 01:22:54.548146 containerd[1465]: time="2026-03-07T01:22:54.548124510Z" level=info msg="Ensure that sandbox 96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a in task-service has been cleanup successfully"
Mar 7 01:22:54.556788 kubelet[2548]: I0307 01:22:54.556767 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:22:54.558722 containerd[1465]: time="2026-03-07T01:22:54.558701786Z" level=info msg="StopPodSandbox for \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\""
Mar 7 01:22:54.559768 containerd[1465]: time="2026-03-07T01:22:54.559751016Z" level=info msg="Ensure that sandbox df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22 in task-service has been cleanup successfully"
Mar 7 01:22:54.570487 kubelet[2548]: I0307 01:22:54.570465 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:22:54.572945 containerd[1465]: time="2026-03-07T01:22:54.572916633Z" level=info msg="StopPodSandbox for \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\""
Mar 7 01:22:54.574145 containerd[1465]: time="2026-03-07T01:22:54.573886923Z" level=info msg="Ensure that sandbox 808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a in task-service has been cleanup successfully"
Mar 7 01:22:54.576441 kubelet[2548]: I0307 01:22:54.576417 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:22:54.577820 containerd[1465]: time="2026-03-07T01:22:54.577742395Z" level=info msg="StopPodSandbox for \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\""
Mar 7 01:22:54.579516 kubelet[2548]: I0307 01:22:54.578921 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e"
Mar 7 01:22:54.579572 containerd[1465]: time="2026-03-07T01:22:54.579228936Z" level=info msg="StopPodSandbox for \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\""
Mar 7 01:22:54.579572 containerd[1465]: time="2026-03-07T01:22:54.579339196Z" level=info msg="Ensure that sandbox 3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e in task-service has been cleanup successfully"
Mar 7 01:22:54.581081 containerd[1465]: time="2026-03-07T01:22:54.581061197Z" level=info msg="Ensure that sandbox 7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709 in task-service has been cleanup successfully"
Mar 7 01:22:54.587919 kubelet[2548]: I0307 01:22:54.587898 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:22:54.589449 containerd[1465]: time="2026-03-07T01:22:54.589072651Z" level=info msg="StopPodSandbox for \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\""
Mar 7 01:22:54.589449 containerd[1465]: time="2026-03-07T01:22:54.589212871Z" level=info msg="Ensure that sandbox aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2 in task-service has been cleanup successfully"
Mar 7 01:22:54.595877 kubelet[2548]: I0307 01:22:54.595862 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14"
Mar 7 01:22:54.596837 kubelet[2548]: E0307 01:22:54.596819 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:22:54.610500 containerd[1465]: time="2026-03-07T01:22:54.610470471Z" level=info msg="StopPodSandbox for \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\""
Mar 7 01:22:54.610797 containerd[1465]: time="2026-03-07T01:22:54.610779212Z" level=info msg="Ensure that sandbox 5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14 in task-service has been cleanup successfully"
Mar 7 01:22:54.651980 containerd[1465]: time="2026-03-07T01:22:54.651778882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:22:54.651980 containerd[1465]: time="2026-03-07T01:22:54.651837922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:22:54.651980 containerd[1465]: time="2026-03-07T01:22:54.651852102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:22:54.651980 containerd[1465]: time="2026-03-07T01:22:54.651923412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:22:54.729365 kubelet[2548]: I0307 01:22:54.728883 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-x6qzq" podStartSLOduration=2.990648496 podStartE2EDuration="13.728873121s" podCreationTimestamp="2026-03-07 01:22:41 +0000 UTC" firstStartedPulling="2026-03-07 01:22:42.014471229 +0000 UTC m=+18.534442662" lastFinishedPulling="2026-03-07 01:22:53.543416088 +0000 UTC m=+29.272667287" observedRunningTime="2026-03-07 01:22:54.583622968 +0000 UTC m=+30.312874167" watchObservedRunningTime="2026-03-07 01:22:54.728873121 +0000 UTC m=+30.458124320"
Mar 7 01:22:54.772518 systemd[1]: Started cri-containerd-275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4.scope - libcontainer container 275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4.
Mar 7 01:22:54.938140 containerd[1465]: time="2026-03-07T01:22:54.938017425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69l94,Uid:3751f11e-67af-41ad-8416-aabd3cc9da2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4\""
Mar 7 01:22:54.949035 containerd[1465]: time="2026-03-07T01:22:54.948873351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.791 [INFO][3789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.792 [INFO][3789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" iface="eth0" netns="/var/run/netns/cni-496d67ba-2cb8-6b82-bda4-18f1ae17be67"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.795 [INFO][3789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" iface="eth0" netns="/var/run/netns/cni-496d67ba-2cb8-6b82-bda4-18f1ae17be67"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.796 [INFO][3789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" iface="eth0" netns="/var/run/netns/cni-496d67ba-2cb8-6b82-bda4-18f1ae17be67"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.796 [INFO][3789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.796 [INFO][3789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.917 [INFO][3834] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.919 [INFO][3834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.919 [INFO][3834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.930 [WARNING][3834] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.930 [INFO][3834] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0"
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.936 [INFO][3834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:22:54.951681 containerd[1465]: 2026-03-07 01:22:54.942 [INFO][3789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14"
Mar 7 01:22:54.952932 containerd[1465]: time="2026-03-07T01:22:54.952861693Z" level=info msg="TearDown network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" successfully"
Mar 7 01:22:54.952932 containerd[1465]: time="2026-03-07T01:22:54.952883503Z" level=info msg="StopPodSandbox for \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" returns successfully"
Mar 7 01:22:54.956770 kubelet[2548]: E0307 01:22:54.956190 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:22:54.957273 containerd[1465]: time="2026-03-07T01:22:54.957168735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zdzmq,Uid:cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0,Namespace:kube-system,Attempt:1,}"
Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.715 [INFO][3700] cni-plugin/k8s.go 652: Cleaning up netns
ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.716 [INFO][3700] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" iface="eth0" netns="/var/run/netns/cni-d66f06f6-5492-7fb7-2215-4b9ff02bfcfe" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.719 [INFO][3700] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" iface="eth0" netns="/var/run/netns/cni-d66f06f6-5492-7fb7-2215-4b9ff02bfcfe" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.719 [INFO][3700] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" iface="eth0" netns="/var/run/netns/cni-d66f06f6-5492-7fb7-2215-4b9ff02bfcfe" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.721 [INFO][3700] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.721 [INFO][3700] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.920 [INFO][3817] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.922 [INFO][3817] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.938 [INFO][3817] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.950 [WARNING][3817] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.950 [INFO][3817] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.953 [INFO][3817] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:54.965073 containerd[1465]: 2026-03-07 01:22:54.959 [INFO][3700] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:22:54.965722 containerd[1465]: time="2026-03-07T01:22:54.965408379Z" level=info msg="TearDown network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" successfully" Mar 7 01:22:54.965722 containerd[1465]: time="2026-03-07T01:22:54.965429489Z" level=info msg="StopPodSandbox for \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" returns successfully" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.696 [INFO][3740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.696 [INFO][3740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" iface="eth0" netns="/var/run/netns/cni-24863018-2907-bcf6-074a-c3cb60590b07" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.697 [INFO][3740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" iface="eth0" netns="/var/run/netns/cni-24863018-2907-bcf6-074a-c3cb60590b07" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.697 [INFO][3740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" iface="eth0" netns="/var/run/netns/cni-24863018-2907-bcf6-074a-c3cb60590b07" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.698 [INFO][3740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.698 [INFO][3740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.935 [INFO][3811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.935 [INFO][3811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.953 [INFO][3811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.969 [WARNING][3811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.970 [INFO][3811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.972 [INFO][3811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:54.996695 containerd[1465]: 2026-03-07 01:22:54.984 [INFO][3740] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Mar 7 01:22:54.997190 containerd[1465]: time="2026-03-07T01:22:54.996839345Z" level=info msg="TearDown network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" successfully" Mar 7 01:22:54.997190 containerd[1465]: time="2026-03-07T01:22:54.996872895Z" level=info msg="StopPodSandbox for \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" returns successfully" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.817 [INFO][3735] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.817 [INFO][3735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" iface="eth0" netns="/var/run/netns/cni-61e48cf8-f41e-6fc7-7625-fa8372f3e0eb" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.818 [INFO][3735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" iface="eth0" netns="/var/run/netns/cni-61e48cf8-f41e-6fc7-7625-fa8372f3e0eb" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.819 [INFO][3735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" iface="eth0" netns="/var/run/netns/cni-61e48cf8-f41e-6fc7-7625-fa8372f3e0eb" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.819 [INFO][3735] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.819 [INFO][3735] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.958 [INFO][3842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.963 [INFO][3842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.972 [INFO][3842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.985 [WARNING][3842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.985 [INFO][3842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.987 [INFO][3842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:54.998766 containerd[1465]: 2026-03-07 01:22:54.991 [INFO][3735] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Mar 7 01:22:54.999880 containerd[1465]: time="2026-03-07T01:22:54.998893686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599474f6f5-25hl4,Uid:4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de,Namespace:calico-system,Attempt:1,}" Mar 7 01:22:54.999880 containerd[1465]: time="2026-03-07T01:22:54.999553526Z" level=info msg="TearDown network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" successfully" Mar 7 01:22:54.999880 containerd[1465]: time="2026-03-07T01:22:54.999569356Z" level=info msg="StopPodSandbox for \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" returns successfully" Mar 7 01:22:55.001047 containerd[1465]: time="2026-03-07T01:22:55.001026567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-7qq69,Uid:7aac9be5-f2a8-4947-a1f7-e67a1f82abd2,Namespace:calico-system,Attempt:1,}" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.842 [INFO][3763] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.843 [INFO][3763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" iface="eth0" netns="/var/run/netns/cni-bb72b698-33e1-2e39-04fb-26a2b8ad1940" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.843 [INFO][3763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" iface="eth0" netns="/var/run/netns/cni-bb72b698-33e1-2e39-04fb-26a2b8ad1940" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.844 [INFO][3763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" iface="eth0" netns="/var/run/netns/cni-bb72b698-33e1-2e39-04fb-26a2b8ad1940" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.844 [INFO][3763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:54.844 [INFO][3763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.002 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.003 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.003 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.016 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.016 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.018 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.045943 containerd[1465]: 2026-03-07 01:22:55.031 [INFO][3763] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:22:55.048224 containerd[1465]: time="2026-03-07T01:22:55.046579379Z" level=info msg="TearDown network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" successfully" Mar 7 01:22:55.048224 containerd[1465]: time="2026-03-07T01:22:55.046605069Z" level=info msg="StopPodSandbox for \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" returns successfully" Mar 7 01:22:55.050493 containerd[1465]: time="2026-03-07T01:22:55.050443391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-6q7rc,Uid:15881307-f8ce-4e05-8cbd-e62d67c74c8e,Namespace:calico-system,Attempt:1,}" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.911 [INFO][3767] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.912 [INFO][3767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" iface="eth0" netns="/var/run/netns/cni-cb90ecd0-0add-a4ea-94e4-721ec5fc49f3" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.912 [INFO][3767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" iface="eth0" netns="/var/run/netns/cni-cb90ecd0-0add-a4ea-94e4-721ec5fc49f3" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.912 [INFO][3767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" iface="eth0" netns="/var/run/netns/cni-cb90ecd0-0add-a4ea-94e4-721ec5fc49f3" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.912 [INFO][3767] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:54.912 [INFO][3767] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.022 [INFO][3872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.022 [INFO][3872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.023 [INFO][3872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.033 [WARNING][3872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.033 [INFO][3872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.035 [INFO][3872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.052788 containerd[1465]: 2026-03-07 01:22:55.038 [INFO][3767] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Mar 7 01:22:55.054012 containerd[1465]: time="2026-03-07T01:22:55.053868793Z" level=info msg="TearDown network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" successfully" Mar 7 01:22:55.054012 containerd[1465]: time="2026-03-07T01:22:55.053889243Z" level=info msg="StopPodSandbox for \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" returns successfully" Mar 7 01:22:55.055747 containerd[1465]: time="2026-03-07T01:22:55.055726774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-p4brr,Uid:a4e013e7-4a81-4a32-a9dd-da65e551cd48,Namespace:calico-system,Attempt:1,}" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.866 [INFO][3786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.866 [INFO][3786] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" iface="eth0" netns="/var/run/netns/cni-9b5a515a-c815-456e-2d7d-58a5c584f3cc" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.867 [INFO][3786] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" iface="eth0" netns="/var/run/netns/cni-9b5a515a-c815-456e-2d7d-58a5c584f3cc" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.869 [INFO][3786] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" iface="eth0" netns="/var/run/netns/cni-9b5a515a-c815-456e-2d7d-58a5c584f3cc" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.869 [INFO][3786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:54.869 [INFO][3786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.036 [INFO][3859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.037 [INFO][3859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.037 [INFO][3859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.052 [WARNING][3859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.052 [INFO][3859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.053 [INFO][3859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.070801 containerd[1465]: 2026-03-07 01:22:55.063 [INFO][3786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Mar 7 01:22:55.071834 containerd[1465]: time="2026-03-07T01:22:55.070905422Z" level=info msg="TearDown network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" successfully" Mar 7 01:22:55.071834 containerd[1465]: time="2026-03-07T01:22:55.070926332Z" level=info msg="StopPodSandbox for \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" returns successfully" Mar 7 01:22:55.074206 kubelet[2548]: E0307 01:22:55.073546 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:55.076909 containerd[1465]: time="2026-03-07T01:22:55.076887145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-db7dz,Uid:52fafc1f-6106-4d15-bfa6-45d2f9cd684f,Namespace:kube-system,Attempt:1,}" Mar 7 01:22:55.106547 kubelet[2548]: I0307 01:22:55.105953 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/projected/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-kube-api-access-279mr\" (UniqueName: \"kubernetes.io/projected/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-kube-api-access-279mr\") pod \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " Mar 7 01:22:55.106682 kubelet[2548]: I0307 01:22:55.106568 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-backend-key-pair\") pod \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " Mar 7 01:22:55.106682 kubelet[2548]: I0307 01:22:55.106592 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-ca-bundle\") pod \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " Mar 7 01:22:55.106682 kubelet[2548]: I0307 01:22:55.106613 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-nginx-config\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-nginx-config\") pod \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\" (UID: \"caf9c03d-2067-4e54-a6ed-0d88a6e481b4\") " Mar 7 01:22:55.117425 kubelet[2548]: I0307 01:22:55.117088 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-ca-bundle" pod "caf9c03d-2067-4e54-a6ed-0d88a6e481b4" (UID: "caf9c03d-2067-4e54-a6ed-0d88a6e481b4"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:22:55.117425 kubelet[2548]: I0307 01:22:55.117300 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-nginx-config" pod "caf9c03d-2067-4e54-a6ed-0d88a6e481b4" (UID: "caf9c03d-2067-4e54-a6ed-0d88a6e481b4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:22:55.120047 kubelet[2548]: I0307 01:22:55.119823 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-kube-api-access-279mr" pod "caf9c03d-2067-4e54-a6ed-0d88a6e481b4" (UID: "caf9c03d-2067-4e54-a6ed-0d88a6e481b4"). InnerVolumeSpecName "kube-api-access-279mr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:22:55.123600 kubelet[2548]: I0307 01:22:55.123560 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-backend-key-pair" pod "caf9c03d-2067-4e54-a6ed-0d88a6e481b4" (UID: "caf9c03d-2067-4e54-a6ed-0d88a6e481b4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:22:55.212491 kubelet[2548]: I0307 01:22:55.212369 2548 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-backend-key-pair\") on node \"172-232-28-122\" DevicePath \"\"" Mar 7 01:22:55.212491 kubelet[2548]: I0307 01:22:55.212397 2548 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-whisker-ca-bundle\") on node \"172-232-28-122\" DevicePath \"\"" Mar 7 01:22:55.212491 kubelet[2548]: I0307 01:22:55.212408 2548 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-nginx-config\") on node \"172-232-28-122\" DevicePath \"\"" Mar 7 01:22:55.212491 kubelet[2548]: I0307 01:22:55.212415 2548 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-279mr\" (UniqueName: \"kubernetes.io/projected/caf9c03d-2067-4e54-a6ed-0d88a6e481b4-kube-api-access-279mr\") on node \"172-232-28-122\" DevicePath \"\"" Mar 7 01:22:55.356746 systemd[1]: run-containerd-runc-k8s.io-275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4-runc.Ilw72U.mount: Deactivated successfully. Mar 7 01:22:55.356857 systemd[1]: run-netns-cni\x2dd66f06f6\x2d5492\x2d7fb7\x2d2215\x2d4b9ff02bfcfe.mount: Deactivated successfully. Mar 7 01:22:55.356926 systemd[1]: run-netns-cni\x2d61e48cf8\x2df41e\x2d6fc7\x2d7625\x2dfa8372f3e0eb.mount: Deactivated successfully. Mar 7 01:22:55.356995 systemd[1]: run-netns-cni\x2d24863018\x2d2907\x2dbcf6\x2d074a\x2dc3cb60590b07.mount: Deactivated successfully. Mar 7 01:22:55.357062 systemd[1]: run-netns-cni\x2dcb90ecd0\x2d0add\x2da4ea\x2d94e4\x2d721ec5fc49f3.mount: Deactivated successfully. 
Mar 7 01:22:55.357126 systemd[1]: run-netns-cni\x2dbb72b698\x2d33e1\x2d2e39\x2d04fb\x2d26a2b8ad1940.mount: Deactivated successfully. Mar 7 01:22:55.357188 systemd[1]: run-netns-cni\x2d9b5a515a\x2dc815\x2d456e\x2d2d7d\x2d58a5c584f3cc.mount: Deactivated successfully. Mar 7 01:22:55.357251 systemd[1]: run-netns-cni\x2d496d67ba\x2d2cb8\x2d6b82\x2dbda4\x2d18f1ae17be67.mount: Deactivated successfully. Mar 7 01:22:55.357315 systemd[1]: var-lib-kubelet-pods-caf9c03d\x2d2067\x2d4e54\x2da6ed\x2d0d88a6e481b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d279mr.mount: Deactivated successfully. Mar 7 01:22:55.357385 systemd[1]: var-lib-kubelet-pods-caf9c03d\x2d2067\x2d4e54\x2da6ed\x2d0d88a6e481b4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:22:55.460509 systemd-networkd[1370]: caliecbac781538: Link UP Mar 7 01:22:55.461970 systemd-networkd[1370]: caliecbac781538: Gained carrier Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.252 [ERROR][3954] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.294 [INFO][3954] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0 coredns-7d764666f9- kube-system 52fafc1f-6106-4d15-bfa6-45d2f9cd684f 903 0 2026-03-07 01:22:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-28-122 coredns-7d764666f9-db7dz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliecbac781538 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.294 [INFO][3954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.394 [INFO][4051] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" HandleID="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.410 [INFO][4051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" HandleID="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc140), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-28-122", "pod":"coredns-7d764666f9-db7dz", "timestamp":"2026-03-07 01:22:55.394646933 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b7080)} Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.410 [INFO][4051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.410 [INFO][4051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.410 [INFO][4051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.413 [INFO][4051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.419 [INFO][4051] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.424 [INFO][4051] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.429 [INFO][4051] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.433 [INFO][4051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.433 [INFO][4051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.436 [INFO][4051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488 Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.442 [INFO][4051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.448 
[INFO][4051] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 handle="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.448 [INFO][4051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] handle="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" host="172-232-28-122" Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.448 [INFO][4051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.483871 containerd[1465]: 2026-03-07 01:22:55.448 [INFO][4051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" HandleID="k8s-pod-network.4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.455 [INFO][3954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"52fafc1f-6106-4d15-bfa6-45d2f9cd684f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"coredns-7d764666f9-db7dz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecbac781538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.455 [INFO][3954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.2/32] ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.455 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecbac781538 
ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.464 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.465 [INFO][3954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"52fafc1f-6106-4d15-bfa6-45d2f9cd684f", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488", Pod:"coredns-7d764666f9-db7dz", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecbac781538", MAC:"5a:5b:05:61:a9:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.485188 containerd[1465]: 2026-03-07 01:22:55.476 [INFO][3954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488" Namespace="kube-system" Pod="coredns-7d764666f9-db7dz" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0" Mar 7 01:22:55.536455 containerd[1465]: time="2026-03-07T01:22:55.535763644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:55.536455 containerd[1465]: time="2026-03-07T01:22:55.535822374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:55.536455 containerd[1465]: time="2026-03-07T01:22:55.535833514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:55.536455 containerd[1465]: time="2026-03-07T01:22:55.535921224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:55.589718 systemd[1]: Started cri-containerd-4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488.scope - libcontainer container 4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488. Mar 7 01:22:55.600834 systemd-networkd[1370]: cali70d9e9a01c7: Link UP Mar 7 01:22:55.602464 systemd-networkd[1370]: cali70d9e9a01c7: Gained carrier Mar 7 01:22:55.614028 kubelet[2548]: I0307 01:22:55.611923 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:22:55.630094 systemd[1]: Removed slice kubepods-besteffort-podcaf9c03d_2067_4e54_a6ed_0d88a6e481b4.slice - libcontainer container kubepods-besteffort-podcaf9c03d_2067_4e54_a6ed_0d88a6e481b4.slice. Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.134 [ERROR][3895] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.176 [INFO][3895] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0 goldmane-9f7667bb8- calico-system 7aac9be5-f2a8-4947-a1f7-e67a1f82abd2 901 0 2026-03-07 01:22:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-28-122 goldmane-9f7667bb8-7qq69 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali70d9e9a01c7 [] [] }} 
ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.177 [INFO][3895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.417 [INFO][3997] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" HandleID="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.436 [INFO][3997] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" HandleID="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e900), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"goldmane-9f7667bb8-7qq69", "timestamp":"2026-03-07 01:22:55.416997694 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000451b80)} Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.436 [INFO][3997] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.449 [INFO][3997] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.449 [INFO][3997] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.519 [INFO][3997] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.527 [INFO][3997] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.533 [INFO][3997] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.539 [INFO][3997] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.542 [INFO][3997] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.542 [INFO][3997] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.546 [INFO][3997] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.570 [INFO][3997] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.580 
[INFO][3997] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.3/26] block=192.168.75.0/26 handle="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.580 [INFO][3997] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" host="172-232-28-122" Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.580 [INFO][3997] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.643855 containerd[1465]: 2026-03-07 01:22:55.580 [INFO][3997] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" HandleID="k8s-pod-network.c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.585 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"goldmane-9f7667bb8-7qq69", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70d9e9a01c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.585 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.3/32] ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.585 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70d9e9a01c7 ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.603 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.604 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac", Pod:"goldmane-9f7667bb8-7qq69", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70d9e9a01c7", MAC:"c2:ff:53:be:9e:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.644702 containerd[1465]: 2026-03-07 01:22:55.616 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac" Namespace="calico-system" Pod="goldmane-9f7667bb8-7qq69" 
WorkloadEndpoint="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0" Mar 7 01:22:55.705230 containerd[1465]: time="2026-03-07T01:22:55.704717898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:55.705230 containerd[1465]: time="2026-03-07T01:22:55.704781648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:55.705230 containerd[1465]: time="2026-03-07T01:22:55.704803768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:55.705230 containerd[1465]: time="2026-03-07T01:22:55.704900218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:55.736024 systemd[1]: Created slice kubepods-besteffort-pod2db6df4f_11a7_4d86_9f66_cf3da5b22423.slice - libcontainer container kubepods-besteffort-pod2db6df4f_11a7_4d86_9f66_cf3da5b22423.slice. Mar 7 01:22:55.776963 systemd[1]: Started cri-containerd-c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac.scope - libcontainer container c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac. 
Mar 7 01:22:55.806267 systemd-networkd[1370]: cali8889f1a8f3f: Link UP Mar 7 01:22:55.807309 systemd-networkd[1370]: cali8889f1a8f3f: Gained carrier Mar 7 01:22:55.820695 kubelet[2548]: I0307 01:22:55.819788 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2db6df4f-11a7-4d86-9f66-cf3da5b22423-whisker-ca-bundle\") pod \"whisker-6b4c74794-r8c4h\" (UID: \"2db6df4f-11a7-4d86-9f66-cf3da5b22423\") " pod="calico-system/whisker-6b4c74794-r8c4h" Mar 7 01:22:55.820695 kubelet[2548]: I0307 01:22:55.819821 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8m6\" (UniqueName: \"kubernetes.io/projected/2db6df4f-11a7-4d86-9f66-cf3da5b22423-kube-api-access-7s8m6\") pod \"whisker-6b4c74794-r8c4h\" (UID: \"2db6df4f-11a7-4d86-9f66-cf3da5b22423\") " pod="calico-system/whisker-6b4c74794-r8c4h" Mar 7 01:22:55.820695 kubelet[2548]: I0307 01:22:55.819837 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2db6df4f-11a7-4d86-9f66-cf3da5b22423-nginx-config\") pod \"whisker-6b4c74794-r8c4h\" (UID: \"2db6df4f-11a7-4d86-9f66-cf3da5b22423\") " pod="calico-system/whisker-6b4c74794-r8c4h" Mar 7 01:22:55.820695 kubelet[2548]: I0307 01:22:55.819855 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2db6df4f-11a7-4d86-9f66-cf3da5b22423-whisker-backend-key-pair\") pod \"whisker-6b4c74794-r8c4h\" (UID: \"2db6df4f-11a7-4d86-9f66-cf3da5b22423\") " pod="calico-system/whisker-6b4c74794-r8c4h" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.076 [ERROR][3882] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or 
directory filename="/var/lib/calico/mtu" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.143 [INFO][3882] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0 coredns-7d764666f9- kube-system cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0 900 0 2026-03-07 01:22:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-28-122 coredns-7d764666f9-zdzmq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8889f1a8f3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.143 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.414 [INFO][3980] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" HandleID="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.441 [INFO][3980] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" 
HandleID="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048c4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-28-122", "pod":"coredns-7d764666f9-zdzmq", "timestamp":"2026-03-07 01:22:55.414819533 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b0000)} Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.441 [INFO][3980] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.580 [INFO][3980] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.580 [INFO][3980] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.627 [INFO][3980] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.665 [INFO][3980] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.685 [INFO][3980] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.697 [INFO][3980] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.714 [INFO][3980] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 
host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.714 [INFO][3980] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.739 [INFO][3980] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10 Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.749 [INFO][3980] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.763 [INFO][3980] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 handle="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.766 [INFO][3980] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" host="172-232-28-122" Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.766 [INFO][3980] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:22:55.867391 containerd[1465]: 2026-03-07 01:22:55.766 [INFO][3980] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" HandleID="k8s-pod-network.cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.799 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"coredns-7d764666f9-zdzmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8889f1a8f3f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.800 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.4/32] ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.801 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8889f1a8f3f ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.812 [INFO][3882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.813 [INFO][3882] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10", Pod:"coredns-7d764666f9-zdzmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8889f1a8f3f", MAC:"5e:10:01:2f:43:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.867979 containerd[1465]: 2026-03-07 01:22:55.845 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10" Namespace="kube-system" Pod="coredns-7d764666f9-zdzmq" WorkloadEndpoint="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:22:55.873610 containerd[1465]: time="2026-03-07T01:22:55.873294672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-db7dz,Uid:52fafc1f-6106-4d15-bfa6-45d2f9cd684f,Namespace:kube-system,Attempt:1,} returns sandbox id \"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488\"" Mar 7 01:22:55.875651 kubelet[2548]: E0307 01:22:55.874602 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:55.881569 containerd[1465]: time="2026-03-07T01:22:55.881542437Z" level=info msg="CreateContainer within sandbox \"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:22:55.908362 systemd-networkd[1370]: cali8916e999724: Link UP Mar 7 01:22:55.908602 systemd-networkd[1370]: cali8916e999724: Gained carrier Mar 7 01:22:55.931655 containerd[1465]: time="2026-03-07T01:22:55.930233611Z" level=info msg="CreateContainer within sandbox \"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee6ada5859b13e58b2f6a5f4c00c79b63165ff39ca23f75054204e6eccfe3db\"" Mar 7 01:22:55.943672 containerd[1465]: time="2026-03-07T01:22:55.943020297Z" level=info msg="StartContainer for \"bee6ada5859b13e58b2f6a5f4c00c79b63165ff39ca23f75054204e6eccfe3db\"" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.152 [ERROR][3904] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.200 [INFO][3904] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0 calico-kube-controllers-599474f6f5- calico-system 4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de 898 0 2026-03-07 01:22:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:599474f6f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-28-122 calico-kube-controllers-599474f6f5-25hl4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8916e999724 [] [] }} ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.200 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" 
WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.435 [INFO][4007] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" HandleID="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.454 [INFO][4007] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" HandleID="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5e60), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"calico-kube-controllers-599474f6f5-25hl4", "timestamp":"2026-03-07 01:22:55.435991664 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000eb080)} Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.454 [INFO][4007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.767 [INFO][4007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.767 [INFO][4007] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.776 [INFO][4007] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.793 [INFO][4007] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.816 [INFO][4007] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.824 [INFO][4007] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.830 [INFO][4007] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.832 [INFO][4007] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.847 [INFO][4007] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792 Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.863 [INFO][4007] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.877 [INFO][4007] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 
handle="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.877 [INFO][4007] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] handle="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" host="172-232-28-122" Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.878 [INFO][4007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:55.977712 containerd[1465]: 2026-03-07 01:22:55.878 [INFO][4007] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" HandleID="k8s-pod-network.6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.892 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0", GenerateName:"calico-kube-controllers-599474f6f5-", Namespace:"calico-system", SelfLink:"", UID:"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599474f6f5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"calico-kube-controllers-599474f6f5-25hl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8916e999724", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.893 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.5/32] ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.893 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8916e999724 ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.915 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" 
WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.919 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0", GenerateName:"calico-kube-controllers-599474f6f5-", Namespace:"calico-system", SelfLink:"", UID:"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599474f6f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792", Pod:"calico-kube-controllers-599474f6f5-25hl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8916e999724", MAC:"16:f8:bd:a1:c6:9f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:55.980385 containerd[1465]: 2026-03-07 01:22:55.971 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792" Namespace="calico-system" Pod="calico-kube-controllers-599474f6f5-25hl4" WorkloadEndpoint="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0" Mar 7 01:22:56.027032 systemd[1]: Started cri-containerd-bee6ada5859b13e58b2f6a5f4c00c79b63165ff39ca23f75054204e6eccfe3db.scope - libcontainer container bee6ada5859b13e58b2f6a5f4c00c79b63165ff39ca23f75054204e6eccfe3db. Mar 7 01:22:56.037256 containerd[1465]: time="2026-03-07T01:22:56.026195049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:56.037256 containerd[1465]: time="2026-03-07T01:22:56.026247999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:56.037256 containerd[1465]: time="2026-03-07T01:22:56.026272949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.037256 containerd[1465]: time="2026-03-07T01:22:56.026468389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.037716 containerd[1465]: time="2026-03-07T01:22:56.037671315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-7qq69,Uid:7aac9be5-f2a8-4947-a1f7-e67a1f82abd2,Namespace:calico-system,Attempt:1,} returns sandbox id \"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac\"" Mar 7 01:22:56.044164 containerd[1465]: time="2026-03-07T01:22:56.044127088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b4c74794-r8c4h,Uid:2db6df4f-11a7-4d86-9f66-cf3da5b22423,Namespace:calico-system,Attempt:0,}" Mar 7 01:22:56.054764 systemd-networkd[1370]: cali4149000f018: Link UP Mar 7 01:22:56.057047 systemd-networkd[1370]: cali4149000f018: Gained carrier Mar 7 01:22:56.111779 systemd[1]: Started cri-containerd-cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10.scope - libcontainer container cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10. Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.195 [ERROR][3919] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.240 [INFO][3919] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0 calico-apiserver-57bff9d745- calico-system 15881307-f8ce-4e05-8cbd-e62d67c74c8e 902 0 2026-03-07 01:22:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57bff9d745 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-28-122 calico-apiserver-57bff9d745-6q7rc eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] cali4149000f018 [] [] }} ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.240 [INFO][3919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.481 [INFO][4024] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" HandleID="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.493 [INFO][4024] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" HandleID="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061c370), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"calico-apiserver-57bff9d745-6q7rc", "timestamp":"2026-03-07 01:22:55.481455147 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000208dc0)} Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.493 
[INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.881 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.881 [INFO][4024] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.892 [INFO][4024] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.901 [INFO][4024] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.920 [INFO][4024] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.923 [INFO][4024] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.944 [INFO][4024] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.944 [INFO][4024] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.975 [INFO][4024] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2 Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:55.989 [INFO][4024] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" 
host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:56.028 [INFO][4024] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 handle="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:56.028 [INFO][4024] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" host="172-232-28-122" Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:56.028 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:56.124095 containerd[1465]: 2026-03-07 01:22:56.031 [INFO][4024] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" HandleID="k8s-pod-network.cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.047 [INFO][3919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"15881307-f8ce-4e05-8cbd-e62d67c74c8e", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"calico-apiserver-57bff9d745-6q7rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4149000f018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.048 [INFO][3919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.6/32] ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.048 [INFO][3919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4149000f018 ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.058 [INFO][3919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" 
Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.059 [INFO][3919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"15881307-f8ce-4e05-8cbd-e62d67c74c8e", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2", Pod:"calico-apiserver-57bff9d745-6q7rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4149000f018", MAC:"b6:df:96:86:0b:53", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.126045 containerd[1465]: 2026-03-07 01:22:56.110 [INFO][3919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-6q7rc" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:22:56.137938 containerd[1465]: time="2026-03-07T01:22:56.132391002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:56.137938 containerd[1465]: time="2026-03-07T01:22:56.132582262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:56.137938 containerd[1465]: time="2026-03-07T01:22:56.132594102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.137938 containerd[1465]: time="2026-03-07T01:22:56.132687242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.202016 containerd[1465]: time="2026-03-07T01:22:56.201567916Z" level=info msg="StartContainer for \"bee6ada5859b13e58b2f6a5f4c00c79b63165ff39ca23f75054204e6eccfe3db\" returns successfully" Mar 7 01:22:56.207083 systemd-networkd[1370]: cali8267d1a29fc: Link UP Mar 7 01:22:56.208175 systemd-networkd[1370]: cali8267d1a29fc: Gained carrier Mar 7 01:22:56.226765 systemd[1]: Started cri-containerd-6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792.scope - libcontainer container 6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792. 
Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.178 [ERROR][3921] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.216 [INFO][3921] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0 calico-apiserver-57bff9d745- calico-system a4e013e7-4a81-4a32-a9dd-da65e551cd48 904 0 2026-03-07 01:22:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57bff9d745 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-28-122 calico-apiserver-57bff9d745-p4brr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8267d1a29fc [] [] }} ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.219 [INFO][3921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.483 [INFO][4019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" HandleID="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" 
Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.499 [INFO][4019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" HandleID="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122370), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"calico-apiserver-57bff9d745-p4brr", "timestamp":"2026-03-07 01:22:55.483414358 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:55.499 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.028 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.028 [INFO][4019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.033 [INFO][4019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.061 [INFO][4019] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.114 [INFO][4019] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.134 [INFO][4019] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.142 [INFO][4019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.142 [INFO][4019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.146 [INFO][4019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70 Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.155 [INFO][4019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.176 [INFO][4019] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.7/26] block=192.168.75.0/26 
handle="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.176 [INFO][4019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.7/26] handle="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" host="172-232-28-122" Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.176 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:56.250794 containerd[1465]: 2026-03-07 01:22:56.176 [INFO][4019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.7/26] IPv6=[] ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" HandleID="k8s-pod-network.973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.192 [INFO][3921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"a4e013e7-4a81-4a32-a9dd-da65e551cd48", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"calico-apiserver-57bff9d745-p4brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8267d1a29fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.193 [INFO][3921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.7/32] ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.193 [INFO][3921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8267d1a29fc ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.209 [INFO][3921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.211 [INFO][3921] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"a4e013e7-4a81-4a32-a9dd-da65e551cd48", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70", Pod:"calico-apiserver-57bff9d745-p4brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8267d1a29fc", MAC:"5a:32:fd:a2:d4:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.251267 containerd[1465]: 2026-03-07 01:22:56.241 [INFO][3921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70" Namespace="calico-system" Pod="calico-apiserver-57bff9d745-p4brr" WorkloadEndpoint="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0" Mar 7 01:22:56.276125 systemd-networkd[1370]: cali2ed9a10caaa: Gained IPv6LL Mar 7 01:22:56.289172 containerd[1465]: time="2026-03-07T01:22:56.283617407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:56.289172 containerd[1465]: time="2026-03-07T01:22:56.283707668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:56.289172 containerd[1465]: time="2026-03-07T01:22:56.283721378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.289172 containerd[1465]: time="2026-03-07T01:22:56.283806178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.363112 containerd[1465]: time="2026-03-07T01:22:56.359473465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:56.363112 containerd[1465]: time="2026-03-07T01:22:56.359542895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:56.363112 containerd[1465]: time="2026-03-07T01:22:56.359554315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.363112 containerd[1465]: time="2026-03-07T01:22:56.359667175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.377787 systemd[1]: Started cri-containerd-cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2.scope - libcontainer container cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2. Mar 7 01:22:56.408733 kubelet[2548]: I0307 01:22:56.408337 2548 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="caf9c03d-2067-4e54-a6ed-0d88a6e481b4" path="/var/lib/kubelet/pods/caf9c03d-2067-4e54-a6ed-0d88a6e481b4/volumes" Mar 7 01:22:56.410260 containerd[1465]: time="2026-03-07T01:22:56.410234521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zdzmq,Uid:cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10\"" Mar 7 01:22:56.411522 kubelet[2548]: E0307 01:22:56.411504 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:56.420205 containerd[1465]: time="2026-03-07T01:22:56.420184656Z" level=info msg="CreateContainer within sandbox \"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:22:56.451219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966350461.mount: Deactivated successfully. 
Mar 7 01:22:56.466881 containerd[1465]: time="2026-03-07T01:22:56.466842849Z" level=info msg="CreateContainer within sandbox \"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb2331d4b72a7d73dc1c4d79388853b3ef5121b204c2987c9d27a6bb98ab0a52\"" Mar 7 01:22:56.468943 containerd[1465]: time="2026-03-07T01:22:56.468780150Z" level=info msg="StartContainer for \"cb2331d4b72a7d73dc1c4d79388853b3ef5121b204c2987c9d27a6bb98ab0a52\"" Mar 7 01:22:56.485768 systemd[1]: Started cri-containerd-973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70.scope - libcontainer container 973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70. Mar 7 01:22:56.537785 systemd[1]: Started cri-containerd-cb2331d4b72a7d73dc1c4d79388853b3ef5121b204c2987c9d27a6bb98ab0a52.scope - libcontainer container cb2331d4b72a7d73dc1c4d79388853b3ef5121b204c2987c9d27a6bb98ab0a52. Mar 7 01:22:56.565329 systemd-networkd[1370]: cali38deab1a2f8: Link UP Mar 7 01:22:56.568864 systemd-networkd[1370]: cali38deab1a2f8: Gained carrier Mar 7 01:22:56.596424 systemd-networkd[1370]: caliecbac781538: Gained IPv6LL Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.260 [ERROR][4300] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.321 [INFO][4300] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0 whisker-6b4c74794- calico-system 2db6df4f-11a7-4d86-9f66-cf3da5b22423 929 0 2026-03-07 01:22:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b4c74794 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} 
{k8s 172-232-28-122 whisker-6b4c74794-r8c4h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali38deab1a2f8 [] [] }} ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.323 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.475 [INFO][4383] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" HandleID="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Workload="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.493 [INFO][4383] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" HandleID="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Workload="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f870), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-28-122", "pod":"whisker-6b4c74794-r8c4h", "timestamp":"2026-03-07 01:22:56.475982494 +0000 UTC"}, Hostname:"172-232-28-122", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)} Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.493 
[INFO][4383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.493 [INFO][4383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.493 [INFO][4383] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-28-122' Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.498 [INFO][4383] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.504 [INFO][4383] ipam/ipam.go 409: Looking up existing affinities for host host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.512 [INFO][4383] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.516 [INFO][4383] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.519 [INFO][4383] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.520 [INFO][4383] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.522 [INFO][4383] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100 Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.528 [INFO][4383] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" 
host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.538 [INFO][4383] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.8/26] block=192.168.75.0/26 handle="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.538 [INFO][4383] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.8/26] handle="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" host="172-232-28-122" Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.539 [INFO][4383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:22:56.599111 containerd[1465]: 2026-03-07 01:22:56.539 [INFO][4383] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.8/26] IPv6=[] ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" HandleID="k8s-pod-network.166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Workload="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.559 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0", GenerateName:"whisker-6b4c74794-", Namespace:"calico-system", SelfLink:"", UID:"2db6df4f-11a7-4d86-9f66-cf3da5b22423", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"6b4c74794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"", Pod:"whisker-6b4c74794-r8c4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38deab1a2f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.559 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.8/32] ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.559 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38deab1a2f8 ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.574 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.578 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0", GenerateName:"whisker-6b4c74794-", Namespace:"calico-system", SelfLink:"", UID:"2db6df4f-11a7-4d86-9f66-cf3da5b22423", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b4c74794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100", Pod:"whisker-6b4c74794-r8c4h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38deab1a2f8", MAC:"12:0b:7c:3b:37:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:22:56.599958 containerd[1465]: 2026-03-07 01:22:56.590 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100" Namespace="calico-system" Pod="whisker-6b4c74794-r8c4h" 
WorkloadEndpoint="172--232--28--122-k8s-whisker--6b4c74794--r8c4h-eth0" Mar 7 01:22:56.608895 containerd[1465]: time="2026-03-07T01:22:56.608871820Z" level=info msg="StartContainer for \"cb2331d4b72a7d73dc1c4d79388853b3ef5121b204c2987c9d27a6bb98ab0a52\" returns successfully" Mar 7 01:22:56.621989 kubelet[2548]: E0307 01:22:56.621406 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:56.632291 kubelet[2548]: E0307 01:22:56.629942 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:56.678170 containerd[1465]: time="2026-03-07T01:22:56.678091355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:22:56.678170 containerd[1465]: time="2026-03-07T01:22:56.678146275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:22:56.681323 containerd[1465]: time="2026-03-07T01:22:56.680887276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.681514 containerd[1465]: time="2026-03-07T01:22:56.681402546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:22:56.723827 kubelet[2548]: I0307 01:22:56.723780 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-db7dz" podStartSLOduration=26.723767336999998 podStartE2EDuration="26.723767337s" podCreationTimestamp="2026-03-07 01:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:56.675701453 +0000 UTC m=+32.404952672" watchObservedRunningTime="2026-03-07 01:22:56.723767337 +0000 UTC m=+32.453018536" Mar 7 01:22:56.725618 kubelet[2548]: I0307 01:22:56.724560 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zdzmq" podStartSLOduration=26.724539728 podStartE2EDuration="26.724539728s" podCreationTimestamp="2026-03-07 01:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:22:56.720834946 +0000 UTC m=+32.450086145" watchObservedRunningTime="2026-03-07 01:22:56.724539728 +0000 UTC m=+32.453790927" Mar 7 01:22:56.743217 systemd[1]: Started cri-containerd-166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100.scope - libcontainer container 166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100. 
Mar 7 01:22:56.752699 containerd[1465]: time="2026-03-07T01:22:56.750944131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:56.752699 containerd[1465]: time="2026-03-07T01:22:56.752411052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:22:56.754045 containerd[1465]: time="2026-03-07T01:22:56.753946392Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:56.757016 containerd[1465]: time="2026-03-07T01:22:56.756986824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:56.758513 containerd[1465]: time="2026-03-07T01:22:56.758481285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.809576104s" Mar 7 01:22:56.758571 containerd[1465]: time="2026-03-07T01:22:56.758509335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:22:56.759447 containerd[1465]: time="2026-03-07T01:22:56.759366675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:22:56.764208 containerd[1465]: time="2026-03-07T01:22:56.764167518Z" level=info msg="CreateContainer within sandbox \"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:22:56.778316 containerd[1465]: time="2026-03-07T01:22:56.778259875Z" level=info msg="CreateContainer within sandbox \"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d5b04e9d1d4db404eac3a40d8890c8544ab621cc3c419287da9f02b8305f0dc5\"" Mar 7 01:22:56.779762 containerd[1465]: time="2026-03-07T01:22:56.779593875Z" level=info msg="StartContainer for \"d5b04e9d1d4db404eac3a40d8890c8544ab621cc3c419287da9f02b8305f0dc5\"" Mar 7 01:22:56.812974 kubelet[2548]: I0307 01:22:56.811899 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:22:56.901509 containerd[1465]: time="2026-03-07T01:22:56.901393726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-6q7rc,Uid:15881307-f8ce-4e05-8cbd-e62d67c74c8e,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2\"" Mar 7 01:22:56.902764 systemd[1]: Started cri-containerd-d5b04e9d1d4db404eac3a40d8890c8544ab621cc3c419287da9f02b8305f0dc5.scope - libcontainer container d5b04e9d1d4db404eac3a40d8890c8544ab621cc3c419287da9f02b8305f0dc5. 
Mar 7 01:22:56.912896 containerd[1465]: time="2026-03-07T01:22:56.911945021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-599474f6f5-25hl4,Uid:4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792\"" Mar 7 01:22:56.981217 containerd[1465]: time="2026-03-07T01:22:56.981184136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b4c74794-r8c4h,Uid:2db6df4f-11a7-4d86-9f66-cf3da5b22423,Namespace:calico-system,Attempt:0,} returns sandbox id \"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100\"" Mar 7 01:22:57.008307 containerd[1465]: time="2026-03-07T01:22:57.008280740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57bff9d745-p4brr,Uid:a4e013e7-4a81-4a32-a9dd-da65e551cd48,Namespace:calico-system,Attempt:1,} returns sandbox id \"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70\"" Mar 7 01:22:57.058911 containerd[1465]: time="2026-03-07T01:22:57.058881515Z" level=info msg="StartContainer for \"d5b04e9d1d4db404eac3a40d8890c8544ab621cc3c419287da9f02b8305f0dc5\" returns successfully" Mar 7 01:22:57.094662 kernel: calico-node[4011]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:22:57.363754 systemd-networkd[1370]: cali8267d1a29fc: Gained IPv6LL Mar 7 01:22:57.556474 systemd-networkd[1370]: cali4149000f018: Gained IPv6LL Mar 7 01:22:57.557745 systemd-networkd[1370]: cali8916e999724: Gained IPv6LL Mar 7 01:22:57.619787 systemd-networkd[1370]: cali70d9e9a01c7: Gained IPv6LL Mar 7 01:22:57.640797 kubelet[2548]: E0307 01:22:57.640773 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:57.641672 kubelet[2548]: E0307 01:22:57.641266 2548 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:57.683775 systemd-networkd[1370]: cali8889f1a8f3f: Gained IPv6LL Mar 7 01:22:57.757403 systemd-networkd[1370]: vxlan.calico: Link UP Mar 7 01:22:57.757411 systemd-networkd[1370]: vxlan.calico: Gained carrier Mar 7 01:22:58.538031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85105800.mount: Deactivated successfully. Mar 7 01:22:58.582370 systemd-networkd[1370]: cali38deab1a2f8: Gained IPv6LL Mar 7 01:22:58.643240 kubelet[2548]: E0307 01:22:58.643204 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:58.644506 kubelet[2548]: E0307 01:22:58.644435 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:58.938567 containerd[1465]: time="2026-03-07T01:22:58.938498114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:22:58.939187 containerd[1465]: time="2026-03-07T01:22:58.938658614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:58.941656 containerd[1465]: time="2026-03-07T01:22:58.940155105Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:58.944677 containerd[1465]: time="2026-03-07T01:22:58.944650657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:22:58.946459 containerd[1465]: time="2026-03-07T01:22:58.946434408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.187032803s" Mar 7 01:22:58.946513 containerd[1465]: time="2026-03-07T01:22:58.946460108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:22:58.947464 containerd[1465]: time="2026-03-07T01:22:58.947352728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:22:58.950545 containerd[1465]: time="2026-03-07T01:22:58.950331320Z" level=info msg="CreateContainer within sandbox \"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:22:58.973994 containerd[1465]: time="2026-03-07T01:22:58.973969272Z" level=info msg="CreateContainer within sandbox \"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c\"" Mar 7 01:22:58.975399 containerd[1465]: time="2026-03-07T01:22:58.975376592Z" level=info msg="StartContainer for \"13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c\"" Mar 7 01:22:59.032985 systemd[1]: run-containerd-runc-k8s.io-13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c-runc.LGFrC4.mount: Deactivated successfully. 
Mar 7 01:22:59.041772 systemd[1]: Started cri-containerd-13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c.scope - libcontainer container 13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c. Mar 7 01:22:59.089819 containerd[1465]: time="2026-03-07T01:22:59.089787550Z" level=info msg="StartContainer for \"13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c\" returns successfully" Mar 7 01:22:59.347958 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Mar 7 01:22:59.650567 kubelet[2548]: E0307 01:22:59.650519 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Mar 7 01:22:59.668936 kubelet[2548]: I0307 01:22:59.668738 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-7qq69" podStartSLOduration=15.76860454 podStartE2EDuration="18.668726469s" podCreationTimestamp="2026-03-07 01:22:41 +0000 UTC" firstStartedPulling="2026-03-07 01:22:56.047149129 +0000 UTC m=+31.776400328" lastFinishedPulling="2026-03-07 01:22:58.947271058 +0000 UTC m=+34.676522257" observedRunningTime="2026-03-07 01:22:59.668515759 +0000 UTC m=+35.397766968" watchObservedRunningTime="2026-03-07 01:22:59.668726469 +0000 UTC m=+35.397977668" Mar 7 01:23:00.502747 containerd[1465]: time="2026-03-07T01:23:00.502594166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:00.504130 containerd[1465]: time="2026-03-07T01:23:00.503481756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:23:00.504130 containerd[1465]: time="2026-03-07T01:23:00.504079406Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:00.506709 containerd[1465]: time="2026-03-07T01:23:00.506058807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:00.507548 containerd[1465]: time="2026-03-07T01:23:00.506873558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.55949588s" Mar 7 01:23:00.507548 containerd[1465]: time="2026-03-07T01:23:00.506900218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:23:00.508352 containerd[1465]: time="2026-03-07T01:23:00.508027888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:23:00.512395 containerd[1465]: time="2026-03-07T01:23:00.512365970Z" level=info msg="CreateContainer within sandbox \"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:23:00.530049 containerd[1465]: time="2026-03-07T01:23:00.530017979Z" level=info msg="CreateContainer within sandbox \"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"17e832c18c2276b49e77b2a10a623922b09864f6963e43c291a4aceda29fe1d6\"" Mar 7 01:23:00.530564 containerd[1465]: time="2026-03-07T01:23:00.530522149Z" level=info msg="StartContainer for \"17e832c18c2276b49e77b2a10a623922b09864f6963e43c291a4aceda29fe1d6\"" Mar 7 
01:23:00.568780 systemd[1]: Started cri-containerd-17e832c18c2276b49e77b2a10a623922b09864f6963e43c291a4aceda29fe1d6.scope - libcontainer container 17e832c18c2276b49e77b2a10a623922b09864f6963e43c291a4aceda29fe1d6. Mar 7 01:23:00.612833 containerd[1465]: time="2026-03-07T01:23:00.612790591Z" level=info msg="StartContainer for \"17e832c18c2276b49e77b2a10a623922b09864f6963e43c291a4aceda29fe1d6\" returns successfully" Mar 7 01:23:00.662834 kubelet[2548]: I0307 01:23:00.662811 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:23:01.664961 kubelet[2548]: I0307 01:23:01.664931 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:23:03.228448 containerd[1465]: time="2026-03-07T01:23:03.228406207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.229469 containerd[1465]: time="2026-03-07T01:23:03.229343538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:23:03.230238 containerd[1465]: time="2026-03-07T01:23:03.229988598Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.231971 containerd[1465]: time="2026-03-07T01:23:03.231947599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.232770 containerd[1465]: time="2026-03-07T01:23:03.232745510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.724689492s" Mar 7 01:23:03.232852 containerd[1465]: time="2026-03-07T01:23:03.232836780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:23:03.234793 containerd[1465]: time="2026-03-07T01:23:03.234751041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:23:03.248860 containerd[1465]: time="2026-03-07T01:23:03.248794578Z" level=info msg="CreateContainer within sandbox \"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:23:03.262824 containerd[1465]: time="2026-03-07T01:23:03.262793595Z" level=info msg="CreateContainer within sandbox \"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f\"" Mar 7 01:23:03.263872 containerd[1465]: time="2026-03-07T01:23:03.263851575Z" level=info msg="StartContainer for \"e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f\"" Mar 7 01:23:03.310762 systemd[1]: Started cri-containerd-e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f.scope - libcontainer container e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f. 
Mar 7 01:23:03.358191 containerd[1465]: time="2026-03-07T01:23:03.357826072Z" level=info msg="StartContainer for \"e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f\" returns successfully" Mar 7 01:23:03.692151 kubelet[2548]: I0307 01:23:03.690460 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-57bff9d745-6q7rc" podStartSLOduration=20.088285268 podStartE2EDuration="23.690444178s" podCreationTimestamp="2026-03-07 01:22:40 +0000 UTC" firstStartedPulling="2026-03-07 01:22:56.905563258 +0000 UTC m=+32.634814477" lastFinishedPulling="2026-03-07 01:23:00.507722188 +0000 UTC m=+36.236973387" observedRunningTime="2026-03-07 01:23:00.675741182 +0000 UTC m=+36.404992401" watchObservedRunningTime="2026-03-07 01:23:03.690444178 +0000 UTC m=+39.419695387" Mar 7 01:23:03.736404 kubelet[2548]: I0307 01:23:03.735725 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-599474f6f5-25hl4" podStartSLOduration=16.416140933 podStartE2EDuration="22.735712921s" podCreationTimestamp="2026-03-07 01:22:41 +0000 UTC" firstStartedPulling="2026-03-07 01:22:56.913954302 +0000 UTC m=+32.643205501" lastFinishedPulling="2026-03-07 01:23:03.23352629 +0000 UTC m=+38.962777489" observedRunningTime="2026-03-07 01:23:03.690960109 +0000 UTC m=+39.420211308" watchObservedRunningTime="2026-03-07 01:23:03.735712921 +0000 UTC m=+39.464964130" Mar 7 01:23:03.913324 containerd[1465]: time="2026-03-07T01:23:03.913286320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.914075 containerd[1465]: time="2026-03-07T01:23:03.914037690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:23:03.914250 containerd[1465]: time="2026-03-07T01:23:03.914191730Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.916809 containerd[1465]: time="2026-03-07T01:23:03.916018191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:03.916809 containerd[1465]: time="2026-03-07T01:23:03.916710191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 681.91242ms" Mar 7 01:23:03.916809 containerd[1465]: time="2026-03-07T01:23:03.916732661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:23:03.918825 containerd[1465]: time="2026-03-07T01:23:03.918753112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:23:03.920896 containerd[1465]: time="2026-03-07T01:23:03.920872353Z" level=info msg="CreateContainer within sandbox \"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:23:03.940876 containerd[1465]: time="2026-03-07T01:23:03.940855573Z" level=info msg="CreateContainer within sandbox \"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e5eef75525e8a200446246f53468ea7190358dcb53555d2cc1040f8d3bb61af2\"" Mar 7 01:23:03.941487 containerd[1465]: time="2026-03-07T01:23:03.941465794Z" level=info msg="StartContainer for 
\"e5eef75525e8a200446246f53468ea7190358dcb53555d2cc1040f8d3bb61af2\"" Mar 7 01:23:03.968757 systemd[1]: Started cri-containerd-e5eef75525e8a200446246f53468ea7190358dcb53555d2cc1040f8d3bb61af2.scope - libcontainer container e5eef75525e8a200446246f53468ea7190358dcb53555d2cc1040f8d3bb61af2. Mar 7 01:23:04.009505 containerd[1465]: time="2026-03-07T01:23:04.009474168Z" level=info msg="StartContainer for \"e5eef75525e8a200446246f53468ea7190358dcb53555d2cc1040f8d3bb61af2\" returns successfully" Mar 7 01:23:04.071184 containerd[1465]: time="2026-03-07T01:23:04.071131969Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:04.071729 containerd[1465]: time="2026-03-07T01:23:04.071693209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:23:04.073706 containerd[1465]: time="2026-03-07T01:23:04.073618470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 154.837908ms" Mar 7 01:23:04.073706 containerd[1465]: time="2026-03-07T01:23:04.073686400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:23:04.077862 containerd[1465]: time="2026-03-07T01:23:04.077835082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:23:04.081786 containerd[1465]: time="2026-03-07T01:23:04.081758004Z" level=info msg="CreateContainer within sandbox \"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:23:04.088347 containerd[1465]: time="2026-03-07T01:23:04.088323127Z" level=info msg="CreateContainer within sandbox \"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2a647da147848e6bb554787bfb92a8d57d8d7053da83cf7004621b6ec36fb695\"" Mar 7 01:23:04.089601 containerd[1465]: time="2026-03-07T01:23:04.089565718Z" level=info msg="StartContainer for \"2a647da147848e6bb554787bfb92a8d57d8d7053da83cf7004621b6ec36fb695\"" Mar 7 01:23:04.117758 systemd[1]: Started cri-containerd-2a647da147848e6bb554787bfb92a8d57d8d7053da83cf7004621b6ec36fb695.scope - libcontainer container 2a647da147848e6bb554787bfb92a8d57d8d7053da83cf7004621b6ec36fb695. Mar 7 01:23:04.171060 containerd[1465]: time="2026-03-07T01:23:04.170930598Z" level=info msg="StartContainer for \"2a647da147848e6bb554787bfb92a8d57d8d7053da83cf7004621b6ec36fb695\" returns successfully" Mar 7 01:23:04.728916 kubelet[2548]: I0307 01:23:04.728599 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-57bff9d745-p4brr" podStartSLOduration=17.660299945 podStartE2EDuration="24.727722117s" podCreationTimestamp="2026-03-07 01:22:40 +0000 UTC" firstStartedPulling="2026-03-07 01:22:57.00995536 +0000 UTC m=+32.739206559" lastFinishedPulling="2026-03-07 01:23:04.077377532 +0000 UTC m=+39.806628731" observedRunningTime="2026-03-07 01:23:04.724490905 +0000 UTC m=+40.453742104" watchObservedRunningTime="2026-03-07 01:23:04.727722117 +0000 UTC m=+40.456973316" Mar 7 01:23:04.961641 containerd[1465]: time="2026-03-07T01:23:04.961581323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:04.962745 containerd[1465]: time="2026-03-07T01:23:04.962555364Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:23:04.963490 containerd[1465]: time="2026-03-07T01:23:04.963304214Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:04.965289 containerd[1465]: time="2026-03-07T01:23:04.965266335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:04.966410 containerd[1465]: time="2026-03-07T01:23:04.966380546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 888.489274ms" Mar 7 01:23:04.966452 containerd[1465]: time="2026-03-07T01:23:04.966411106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:23:04.968445 containerd[1465]: time="2026-03-07T01:23:04.968282097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:23:04.972703 containerd[1465]: time="2026-03-07T01:23:04.972244249Z" level=info msg="CreateContainer within sandbox \"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:23:04.990148 containerd[1465]: time="2026-03-07T01:23:04.989881128Z" level=info msg="CreateContainer within sandbox 
\"275c3deca86e62731ea4bcd00d3d71a2e837a8e2d6dbb7fe3ce39785036791c4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6dedfb3525f5e49d659b3b5a92e3351759644449a1ea67fbbbab637a1084fcfa\"" Mar 7 01:23:04.992812 containerd[1465]: time="2026-03-07T01:23:04.992782689Z" level=info msg="StartContainer for \"6dedfb3525f5e49d659b3b5a92e3351759644449a1ea67fbbbab637a1084fcfa\"" Mar 7 01:23:04.995296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547416433.mount: Deactivated successfully. Mar 7 01:23:05.038747 systemd[1]: Started cri-containerd-6dedfb3525f5e49d659b3b5a92e3351759644449a1ea67fbbbab637a1084fcfa.scope - libcontainer container 6dedfb3525f5e49d659b3b5a92e3351759644449a1ea67fbbbab637a1084fcfa. Mar 7 01:23:05.082586 containerd[1465]: time="2026-03-07T01:23:05.082547214Z" level=info msg="StartContainer for \"6dedfb3525f5e49d659b3b5a92e3351759644449a1ea67fbbbab637a1084fcfa\" returns successfully" Mar 7 01:23:05.499826 kubelet[2548]: I0307 01:23:05.499771 2548 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:23:05.499826 kubelet[2548]: I0307 01:23:05.499810 2548 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:23:05.712506 kubelet[2548]: I0307 01:23:05.712373 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:23:05.726649 kubelet[2548]: I0307 01:23:05.726497 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-69l94" podStartSLOduration=14.70690029 podStartE2EDuration="24.726483516s" podCreationTimestamp="2026-03-07 01:22:41 +0000 UTC" firstStartedPulling="2026-03-07 01:22:54.94767234 +0000 UTC m=+30.676923539" lastFinishedPulling="2026-03-07 01:23:04.967255566 +0000 UTC m=+40.696506765" 
observedRunningTime="2026-03-07 01:23:05.724457265 +0000 UTC m=+41.453708474" watchObservedRunningTime="2026-03-07 01:23:05.726483516 +0000 UTC m=+41.455734725" Mar 7 01:23:05.916430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522546660.mount: Deactivated successfully. Mar 7 01:23:05.931349 containerd[1465]: time="2026-03-07T01:23:05.930585108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:05.931349 containerd[1465]: time="2026-03-07T01:23:05.931303948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:23:05.931902 containerd[1465]: time="2026-03-07T01:23:05.931859008Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:05.934069 containerd[1465]: time="2026-03-07T01:23:05.934028089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:23:05.934812 containerd[1465]: time="2026-03-07T01:23:05.934707240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 966.395913ms" Mar 7 01:23:05.934812 containerd[1465]: time="2026-03-07T01:23:05.934746400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 
01:23:05.938872 containerd[1465]: time="2026-03-07T01:23:05.938747462Z" level=info msg="CreateContainer within sandbox \"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:23:05.954181 containerd[1465]: time="2026-03-07T01:23:05.954125729Z" level=info msg="CreateContainer within sandbox \"166047e0fe5c8c8ee9b2663315ff97128bc9be9b249c143aedcfaa9b72b5b100\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"327d2a2e5dc5c00bc0bb90214c32c76a8830d305a0f7826eeeda1cebd285db16\"" Mar 7 01:23:05.957443 containerd[1465]: time="2026-03-07T01:23:05.957420181Z" level=info msg="StartContainer for \"327d2a2e5dc5c00bc0bb90214c32c76a8830d305a0f7826eeeda1cebd285db16\"" Mar 7 01:23:06.002787 systemd[1]: Started cri-containerd-327d2a2e5dc5c00bc0bb90214c32c76a8830d305a0f7826eeeda1cebd285db16.scope - libcontainer container 327d2a2e5dc5c00bc0bb90214c32c76a8830d305a0f7826eeeda1cebd285db16. Mar 7 01:23:06.049244 containerd[1465]: time="2026-03-07T01:23:06.048934287Z" level=info msg="StartContainer for \"327d2a2e5dc5c00bc0bb90214c32c76a8830d305a0f7826eeeda1cebd285db16\" returns successfully" Mar 7 01:23:10.711679 kubelet[2548]: I0307 01:23:10.710903 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:23:10.735184 systemd[1]: run-containerd-runc-k8s.io-13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c-runc.sXA8xG.mount: Deactivated successfully. 
Mar 7 01:23:10.822654 kubelet[2548]: I0307 01:23:10.817762 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6b4c74794-r8c4h" podStartSLOduration=6.868935399 podStartE2EDuration="15.81774943s" podCreationTimestamp="2026-03-07 01:22:55 +0000 UTC" firstStartedPulling="2026-03-07 01:22:56.986599409 +0000 UTC m=+32.715850618" lastFinishedPulling="2026-03-07 01:23:05.93541345 +0000 UTC m=+41.664664649" observedRunningTime="2026-03-07 01:23:06.724510914 +0000 UTC m=+42.453762113" watchObservedRunningTime="2026-03-07 01:23:10.81774943 +0000 UTC m=+46.547000639"
Mar 7 01:23:24.385080 containerd[1465]: time="2026-03-07T01:23:24.385042955Z" level=info msg="StopPodSandbox for \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\""
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.425 [WARNING][5209] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac", Pod:"goldmane-9f7667bb8-7qq69", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70d9e9a01c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.425 [INFO][5209] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.425 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" iface="eth0" netns=""
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.425 [INFO][5209] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.425 [INFO][5209] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.450 [INFO][5219] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.450 [INFO][5219] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.450 [INFO][5219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.456 [WARNING][5219] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.456 [INFO][5219] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.458 [INFO][5219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.467911 containerd[1465]: 2026-03-07 01:23:24.462 [INFO][5209] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.467911 containerd[1465]: time="2026-03-07T01:23:24.467770262Z" level=info msg="TearDown network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" successfully"
Mar 7 01:23:24.467911 containerd[1465]: time="2026-03-07T01:23:24.467793522Z" level=info msg="StopPodSandbox for \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" returns successfully"
Mar 7 01:23:24.468501 containerd[1465]: time="2026-03-07T01:23:24.468463673Z" level=info msg="RemovePodSandbox for \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\""
Mar 7 01:23:24.468551 containerd[1465]: time="2026-03-07T01:23:24.468532473Z" level=info msg="Forcibly stopping sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\""
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.502 [WARNING][5233] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7aac9be5-f2a8-4947-a1f7-e67a1f82abd2", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"c265511b4b17d054fcb76717847197262554dc9de150ed7349c7c9f424f7a0ac", Pod:"goldmane-9f7667bb8-7qq69", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali70d9e9a01c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.503 [INFO][5233] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.503 [INFO][5233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" iface="eth0" netns=""
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.503 [INFO][5233] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.503 [INFO][5233] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.523 [INFO][5240] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.523 [INFO][5240] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.523 [INFO][5240] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.528 [WARNING][5240] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.528 [INFO][5240] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" HandleID="k8s-pod-network.df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22" Workload="172--232--28--122-k8s-goldmane--9f7667bb8--7qq69-eth0"
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.529 [INFO][5240] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.534695 containerd[1465]: 2026-03-07 01:23:24.531 [INFO][5233] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22"
Mar 7 01:23:24.534695 containerd[1465]: time="2026-03-07T01:23:24.533993053Z" level=info msg="TearDown network for sandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" successfully"
Mar 7 01:23:24.538719 containerd[1465]: time="2026-03-07T01:23:24.538689903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:23:24.538780 containerd[1465]: time="2026-03-07T01:23:24.538754923Z" level=info msg="RemovePodSandbox \"df2d44c7b61d0321ba1314f6992320bdea8264a14df9c293e48a1f3717b32d22\" returns successfully"
Mar 7 01:23:24.539237 containerd[1465]: time="2026-03-07T01:23:24.539215635Z" level=info msg="StopPodSandbox for \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\""
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.569 [WARNING][5254] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0", GenerateName:"calico-kube-controllers-599474f6f5-", Namespace:"calico-system", SelfLink:"", UID:"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599474f6f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792", Pod:"calico-kube-controllers-599474f6f5-25hl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8916e999724", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.569 [INFO][5254] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.569 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" iface="eth0" netns=""
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.569 [INFO][5254] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.569 [INFO][5254] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.586 [INFO][5261] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.586 [INFO][5261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.586 [INFO][5261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.592 [WARNING][5261] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.592 [INFO][5261] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.593 [INFO][5261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.597511 containerd[1465]: 2026-03-07 01:23:24.595 [INFO][5254] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.598327 containerd[1465]: time="2026-03-07T01:23:24.597546770Z" level=info msg="TearDown network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" successfully"
Mar 7 01:23:24.598327 containerd[1465]: time="2026-03-07T01:23:24.597570450Z" level=info msg="StopPodSandbox for \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" returns successfully"
Mar 7 01:23:24.598554 containerd[1465]: time="2026-03-07T01:23:24.598514101Z" level=info msg="RemovePodSandbox for \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\""
Mar 7 01:23:24.598554 containerd[1465]: time="2026-03-07T01:23:24.598547191Z" level=info msg="Forcibly stopping sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\""
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.628 [WARNING][5275] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0", GenerateName:"calico-kube-controllers-599474f6f5-", Namespace:"calico-system", SelfLink:"", UID:"4ebdd347-5ce7-4d0e-95dd-1d2bf0c987de", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"599474f6f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"6ca6c2dc314b8e513a4b5729769c5c56dc260ae2d23dac4c615072a50d9cd792", Pod:"calico-kube-controllers-599474f6f5-25hl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8916e999724", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.629 [INFO][5275] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.629 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" iface="eth0" netns=""
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.629 [INFO][5275] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.629 [INFO][5275] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.653 [INFO][5282] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.653 [INFO][5282] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.653 [INFO][5282] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.664 [WARNING][5282] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.664 [INFO][5282] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" HandleID="k8s-pod-network.808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a" Workload="172--232--28--122-k8s-calico--kube--controllers--599474f6f5--25hl4-eth0"
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.665 [INFO][5282] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.671210 containerd[1465]: 2026-03-07 01:23:24.667 [INFO][5275] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a"
Mar 7 01:23:24.671210 containerd[1465]: time="2026-03-07T01:23:24.670082934Z" level=info msg="TearDown network for sandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" successfully"
Mar 7 01:23:24.673108 containerd[1465]: time="2026-03-07T01:23:24.673076970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:23:24.673184 containerd[1465]: time="2026-03-07T01:23:24.673134960Z" level=info msg="RemovePodSandbox \"808973ddfc8808aeabe2a35b2d2cd4e45055f8ff5210e6cf7fa50d60a908410a\" returns successfully"
Mar 7 01:23:24.673620 containerd[1465]: time="2026-03-07T01:23:24.673601662Z" level=info msg="StopPodSandbox for \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\""
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.703 [WARNING][5297] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"a4e013e7-4a81-4a32-a9dd-da65e551cd48", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70", Pod:"calico-apiserver-57bff9d745-p4brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8267d1a29fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.703 [INFO][5297] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.703 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" iface="eth0" netns=""
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.703 [INFO][5297] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.703 [INFO][5297] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.722 [INFO][5305] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.722 [INFO][5305] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.722 [INFO][5305] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.727 [WARNING][5305] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.727 [INFO][5305] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.728 [INFO][5305] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.733895 containerd[1465]: 2026-03-07 01:23:24.731 [INFO][5297] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.734484 containerd[1465]: time="2026-03-07T01:23:24.734028991Z" level=info msg="TearDown network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" successfully"
Mar 7 01:23:24.734484 containerd[1465]: time="2026-03-07T01:23:24.734055371Z" level=info msg="StopPodSandbox for \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" returns successfully"
Mar 7 01:23:24.735089 containerd[1465]: time="2026-03-07T01:23:24.735037022Z" level=info msg="RemovePodSandbox for \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\""
Mar 7 01:23:24.735141 containerd[1465]: time="2026-03-07T01:23:24.735091443Z" level=info msg="Forcibly stopping sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\""
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.773 [WARNING][5319] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"a4e013e7-4a81-4a32-a9dd-da65e551cd48", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"973f0d3e725500e4bc370e937d9393787f132e0305ffca1a275416292baf8c70", Pod:"calico-apiserver-57bff9d745-p4brr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8267d1a29fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.773 [INFO][5319] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.773 [INFO][5319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" iface="eth0" netns=""
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.773 [INFO][5319] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.773 [INFO][5319] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.793 [INFO][5326] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.793 [INFO][5326] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.793 [INFO][5326] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.798 [WARNING][5326] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.798 [INFO][5326] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" HandleID="k8s-pod-network.7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--p4brr-eth0"
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.799 [INFO][5326] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.804158 containerd[1465]: 2026-03-07 01:23:24.801 [INFO][5319] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709"
Mar 7 01:23:24.804557 containerd[1465]: time="2026-03-07T01:23:24.804238591Z" level=info msg="TearDown network for sandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" successfully"
Mar 7 01:23:24.807880 containerd[1465]: time="2026-03-07T01:23:24.807829448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:23:24.807926 containerd[1465]: time="2026-03-07T01:23:24.807880398Z" level=info msg="RemovePodSandbox \"7291f2700b83c260b72b73f26bf49341bc92307c25f55ea2aa55ecc6bb0b2709\" returns successfully"
Mar 7 01:23:24.810868 containerd[1465]: time="2026-03-07T01:23:24.808439559Z" level=info msg="StopPodSandbox for \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\""
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.864 [WARNING][5340] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"15881307-f8ce-4e05-8cbd-e62d67c74c8e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2", Pod:"calico-apiserver-57bff9d745-6q7rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4149000f018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.865 [INFO][5340] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.865 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" iface="eth0" netns=""
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.865 [INFO][5340] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.865 [INFO][5340] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.890 [INFO][5349] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.890 [INFO][5349] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.891 [INFO][5349] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.896 [WARNING][5349] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.896 [INFO][5349] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0"
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.898 [INFO][5349] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:24.902487 containerd[1465]: 2026-03-07 01:23:24.900 [INFO][5340] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e"
Mar 7 01:23:24.903259 containerd[1465]: time="2026-03-07T01:23:24.902509560Z" level=info msg="TearDown network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" successfully"
Mar 7 01:23:24.903259 containerd[1465]: time="2026-03-07T01:23:24.902532410Z" level=info msg="StopPodSandbox for \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" returns successfully"
Mar 7 01:23:24.903259 containerd[1465]: time="2026-03-07T01:23:24.903093421Z" level=info msg="RemovePodSandbox for \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\""
Mar 7 01:23:24.903259 containerd[1465]: time="2026-03-07T01:23:24.903118721Z" level=info msg="Forcibly stopping sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\""
Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.934 [WARNING][5363] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0", GenerateName:"calico-apiserver-57bff9d745-", Namespace:"calico-system", SelfLink:"", UID:"15881307-f8ce-4e05-8cbd-e62d67c74c8e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57bff9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cb2c1706b0df88f310633cfdaecf5ce55483179fbc50cb5211049a978e453ed2", Pod:"calico-apiserver-57bff9d745-6q7rc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4149000f018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.934 [INFO][5363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.934 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" iface="eth0" netns="" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.934 [INFO][5363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.934 [INFO][5363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.957 [INFO][5370] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.957 [INFO][5370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.957 [INFO][5370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.963 [WARNING][5370] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.963 [INFO][5370] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" HandleID="k8s-pod-network.3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Workload="172--232--28--122-k8s-calico--apiserver--57bff9d745--6q7rc-eth0" Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.964 [INFO][5370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:23:24.969198 containerd[1465]: 2026-03-07 01:23:24.966 [INFO][5363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e" Mar 7 01:23:24.969198 containerd[1465]: time="2026-03-07T01:23:24.969149472Z" level=info msg="TearDown network for sandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" successfully" Mar 7 01:23:24.972560 containerd[1465]: time="2026-03-07T01:23:24.972513270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:23:24.972720 containerd[1465]: time="2026-03-07T01:23:24.972563200Z" level=info msg="RemovePodSandbox \"3429b9e02ea5fd79f9370066abcf8081ff05f9902f627be6cd4136b2f42bc83e\" returns successfully" Mar 7 01:23:24.973001 containerd[1465]: time="2026-03-07T01:23:24.972978311Z" level=info msg="StopPodSandbox for \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\"" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.002 [WARNING][5384] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" WorkloadEndpoint="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.002 [INFO][5384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.002 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" iface="eth0" netns="" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.002 [INFO][5384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.002 [INFO][5384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.020 [INFO][5391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.020 [INFO][5391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.020 [INFO][5391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.025 [WARNING][5391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.025 [INFO][5391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.027 [INFO][5391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:23:25.031452 containerd[1465]: 2026-03-07 01:23:25.029 [INFO][5384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.032042 containerd[1465]: time="2026-03-07T01:23:25.031726494Z" level=info msg="TearDown network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" successfully" Mar 7 01:23:25.032042 containerd[1465]: time="2026-03-07T01:23:25.031749994Z" level=info msg="StopPodSandbox for \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" returns successfully" Mar 7 01:23:25.032465 containerd[1465]: time="2026-03-07T01:23:25.032444705Z" level=info msg="RemovePodSandbox for \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\"" Mar 7 01:23:25.032513 containerd[1465]: time="2026-03-07T01:23:25.032472585Z" level=info msg="Forcibly stopping sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\"" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.065 [WARNING][5405] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" WorkloadEndpoint="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.066 [INFO][5405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.066 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" iface="eth0" netns="" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.066 [INFO][5405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.066 [INFO][5405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.086 [INFO][5413] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.086 [INFO][5413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.086 [INFO][5413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.091 [WARNING][5413] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.091 [INFO][5413] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" HandleID="k8s-pod-network.96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Workload="172--232--28--122-k8s-whisker--57685c8f89--t66t7-eth0" Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.092 [INFO][5413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:23:25.097009 containerd[1465]: 2026-03-07 01:23:25.094 [INFO][5405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a" Mar 7 01:23:25.097009 containerd[1465]: time="2026-03-07T01:23:25.096898046Z" level=info msg="TearDown network for sandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" successfully" Mar 7 01:23:25.100397 containerd[1465]: time="2026-03-07T01:23:25.100248593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:23:25.100397 containerd[1465]: time="2026-03-07T01:23:25.100313423Z" level=info msg="RemovePodSandbox \"96831a1ebdc93dcab960e3a29958a0e1f5f06855ec06cd0735503ba978d25a4a\" returns successfully" Mar 7 01:23:25.101029 containerd[1465]: time="2026-03-07T01:23:25.100783624Z" level=info msg="StopPodSandbox for \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\"" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.131 [WARNING][5427] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10", Pod:"coredns-7d764666f9-zdzmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8889f1a8f3f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.131 [INFO][5427] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.131 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" iface="eth0" netns="" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.131 [INFO][5427] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.131 [INFO][5427] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.149 [INFO][5434] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.149 [INFO][5434] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.149 [INFO][5434] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.154 [WARNING][5434] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.154 [INFO][5434] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.156 [INFO][5434] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:23:25.160527 containerd[1465]: 2026-03-07 01:23:25.158 [INFO][5427] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.160946 containerd[1465]: time="2026-03-07T01:23:25.160599226Z" level=info msg="TearDown network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" successfully" Mar 7 01:23:25.160946 containerd[1465]: time="2026-03-07T01:23:25.160667836Z" level=info msg="StopPodSandbox for \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" returns successfully" Mar 7 01:23:25.161225 containerd[1465]: time="2026-03-07T01:23:25.161200406Z" level=info msg="RemovePodSandbox for \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\"" Mar 7 01:23:25.161477 containerd[1465]: time="2026-03-07T01:23:25.161231026Z" level=info msg="Forcibly stopping sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\"" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.194 [WARNING][5448] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb5fbdd0-b27a-4ac5-bfce-f97041d5d5e0", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"cfcd5d5a8093bc382f209e56c022f32164c23dacdb3e73c0a45a3bd4e83c2f10", Pod:"coredns-7d764666f9-zdzmq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8889f1a8f3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.195 [INFO][5448] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.195 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" iface="eth0" netns="" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.195 [INFO][5448] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.195 [INFO][5448] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.215 [INFO][5456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.215 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.215 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.220 [WARNING][5456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.220 [INFO][5456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" HandleID="k8s-pod-network.5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Workload="172--232--28--122-k8s-coredns--7d764666f9--zdzmq-eth0" Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.221 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:23:25.226621 containerd[1465]: 2026-03-07 01:23:25.223 [INFO][5448] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14" Mar 7 01:23:25.227967 containerd[1465]: time="2026-03-07T01:23:25.226663980Z" level=info msg="TearDown network for sandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" successfully" Mar 7 01:23:25.230984 containerd[1465]: time="2026-03-07T01:23:25.230963109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:23:25.231029 containerd[1465]: time="2026-03-07T01:23:25.231015219Z" level=info msg="RemovePodSandbox \"5b3d3f130bf612f1bedcf68ee91f280414068f4147495d26da78b3bb8a0ded14\" returns successfully" Mar 7 01:23:25.231517 containerd[1465]: time="2026-03-07T01:23:25.231497999Z" level=info msg="StopPodSandbox for \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\"" Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.268 [WARNING][5470] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"52fafc1f-6106-4d15-bfa6-45d2f9cd684f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488", Pod:"coredns-7d764666f9-db7dz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecbac781538", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.269 [INFO][5470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.269 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" iface="eth0" netns=""
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.269 [INFO][5470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.269 [INFO][5470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.288 [INFO][5477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.289 [INFO][5477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.289 [INFO][5477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.294 [WARNING][5477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.294 [INFO][5477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.296 [INFO][5477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:25.300668 containerd[1465]: 2026-03-07 01:23:25.298 [INFO][5470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.300668 containerd[1465]: time="2026-03-07T01:23:25.300524080Z" level=info msg="TearDown network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" successfully"
Mar 7 01:23:25.300668 containerd[1465]: time="2026-03-07T01:23:25.300551210Z" level=info msg="StopPodSandbox for \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" returns successfully"
Mar 7 01:23:25.301509 containerd[1465]: time="2026-03-07T01:23:25.301475792Z" level=info msg="RemovePodSandbox for \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\""
Mar 7 01:23:25.301561 containerd[1465]: time="2026-03-07T01:23:25.301515122Z" level=info msg="Forcibly stopping sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\""
Mar 7 01:23:25.345986 kubelet[2548]: I0307 01:23:25.345339 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.332 [WARNING][5491] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"52fafc1f-6106-4d15-bfa6-45d2f9cd684f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 22, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-28-122", ContainerID:"4daeea53fa1257f87e6488d1d46859553beaef3c607b814841c7a35016648488", Pod:"coredns-7d764666f9-db7dz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecbac781538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.332 [INFO][5491] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.332 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" iface="eth0" netns=""
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.332 [INFO][5491] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.332 [INFO][5491] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.359 [INFO][5498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.359 [INFO][5498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.359 [INFO][5498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.368 [WARNING][5498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.368 [INFO][5498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" HandleID="k8s-pod-network.aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2" Workload="172--232--28--122-k8s-coredns--7d764666f9--db7dz-eth0"
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.375 [INFO][5498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:23:25.381504 containerd[1465]: 2026-03-07 01:23:25.378 [INFO][5491] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2"
Mar 7 01:23:25.382683 containerd[1465]: time="2026-03-07T01:23:25.382014236Z" level=info msg="TearDown network for sandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" successfully"
Mar 7 01:23:25.386760 containerd[1465]: time="2026-03-07T01:23:25.386738175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:23:25.387859 containerd[1465]: time="2026-03-07T01:23:25.387259776Z" level=info msg="RemovePodSandbox \"aebacf875be31f763a17aa02fcd68c6fe56d53a70b65e355273942dae7fd6eb2\" returns successfully"
Mar 7 01:23:40.817947 systemd[1]: run-containerd-runc-k8s.io-13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c-runc.QrjydB.mount: Deactivated successfully.
Mar 7 01:23:43.496112 systemd[1]: run-containerd-runc-k8s.io-13eb91c1cadb33978db6de2503862efaff1c6f5db8439d4a443e0a9253f0504c-runc.06ldIN.mount: Deactivated successfully.
Mar 7 01:23:47.643727 kubelet[2548]: I0307 01:23:47.643483 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:23:52.392427 kubelet[2548]: E0307 01:23:52.392110 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:23:53.391901 kubelet[2548]: E0307 01:23:53.391862 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:23:55.391829 kubelet[2548]: E0307 01:23:55.391785 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:23:57.143109 systemd[1]: run-containerd-runc-k8s.io-f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a-runc.8sgoGk.mount: Deactivated successfully.
Mar 7 01:24:06.399471 kubelet[2548]: E0307 01:24:06.397709 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:24:11.441579 kubelet[2548]: I0307 01:24:11.441537 2548 ???:1] "http: TLS handshake error from 192.168.159.117:56522: EOF"
Mar 7 01:24:13.392589 kubelet[2548]: E0307 01:24:13.392314 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:24:14.393534 kubelet[2548]: E0307 01:24:14.392607 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:24:19.364704 systemd[1]: run-containerd-runc-k8s.io-e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f-runc.Slo8Fd.mount: Deactivated successfully.
Mar 7 01:24:21.392433 kubelet[2548]: E0307 01:24:21.392392 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:24:23.599378 kubelet[2548]: I0307 01:24:23.599283 2548 ???:1] "http: TLS handshake error from 192.168.159.117:55974: EOF"
Mar 7 01:24:23.806453 kubelet[2548]: I0307 01:24:23.806385 2548 ???:1] "http: TLS handshake error from 192.168.159.117:57840: client sent an HTTP request to an HTTPS server"
Mar 7 01:24:27.138033 systemd[1]: run-containerd-runc-k8s.io-f2dc075017292c423373188977ff5e67ecbe15cacb368503c7370e754c84547a-runc.zxUaZr.mount: Deactivated successfully.
Mar 7 01:24:32.693242 systemd[1]: Started sshd@7-172.232.28.122:22-68.220.241.50:40706.service - OpenSSH per-connection server daemon (68.220.241.50:40706).
Mar 7 01:24:32.851026 sshd[5762]: Accepted publickey for core from 68.220.241.50 port 40706 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:32.852053 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:32.857711 systemd-logind[1450]: New session 8 of user core.
Mar 7 01:24:32.863776 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:24:33.047596 sshd[5762]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:33.056329 systemd[1]: sshd@7-172.232.28.122:22-68.220.241.50:40706.service: Deactivated successfully.
Mar 7 01:24:33.059347 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:24:33.060183 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:24:33.062154 systemd-logind[1450]: Removed session 8.
Mar 7 01:24:33.697557 systemd[1]: run-containerd-runc-k8s.io-e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f-runc.IoQ9jX.mount: Deactivated successfully.
Mar 7 01:24:38.082546 systemd[1]: Started sshd@8-172.232.28.122:22-68.220.241.50:40714.service - OpenSSH per-connection server daemon (68.220.241.50:40714).
Mar 7 01:24:38.236726 sshd[5806]: Accepted publickey for core from 68.220.241.50 port 40714 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:38.239007 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:38.245600 systemd-logind[1450]: New session 9 of user core.
Mar 7 01:24:38.248834 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:24:38.439281 sshd[5806]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:38.443698 systemd[1]: sshd@8-172.232.28.122:22-68.220.241.50:40714.service: Deactivated successfully.
Mar 7 01:24:38.446579 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:24:38.447551 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:24:38.449278 systemd-logind[1450]: Removed session 9.
Mar 7 01:24:43.471087 systemd[1]: Started sshd@9-172.232.28.122:22-68.220.241.50:45524.service - OpenSSH per-connection server daemon (68.220.241.50:45524).
Mar 7 01:24:43.627702 sshd[5839]: Accepted publickey for core from 68.220.241.50 port 45524 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:43.628443 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:43.634348 systemd-logind[1450]: New session 10 of user core.
Mar 7 01:24:43.642775 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:24:43.819237 sshd[5839]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:43.824302 systemd[1]: sshd@9-172.232.28.122:22-68.220.241.50:45524.service: Deactivated successfully.
Mar 7 01:24:43.826869 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:24:43.827599 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:24:43.829136 systemd-logind[1450]: Removed session 10.
Mar 7 01:24:43.852140 systemd[1]: Started sshd@10-172.232.28.122:22-68.220.241.50:45536.service - OpenSSH per-connection server daemon (68.220.241.50:45536).
Mar 7 01:24:44.004667 sshd[5874]: Accepted publickey for core from 68.220.241.50 port 45536 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:44.008150 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:44.014490 systemd-logind[1450]: New session 11 of user core.
Mar 7 01:24:44.017804 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:24:44.228105 sshd[5874]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:44.234257 systemd[1]: sshd@10-172.232.28.122:22-68.220.241.50:45536.service: Deactivated successfully.
Mar 7 01:24:44.237113 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:24:44.240208 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:24:44.241815 systemd-logind[1450]: Removed session 11.
Mar 7 01:24:44.267921 systemd[1]: Started sshd@11-172.232.28.122:22-68.220.241.50:45540.service - OpenSSH per-connection server daemon (68.220.241.50:45540).
Mar 7 01:24:44.418181 sshd[5885]: Accepted publickey for core from 68.220.241.50 port 45540 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:44.420797 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:44.426955 systemd-logind[1450]: New session 12 of user core.
Mar 7 01:24:44.433799 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:24:44.615426 sshd[5885]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:44.622624 systemd[1]: sshd@11-172.232.28.122:22-68.220.241.50:45540.service: Deactivated successfully.
Mar 7 01:24:44.622956 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:24:44.625914 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:24:44.629706 systemd-logind[1450]: Removed session 12.
Mar 7 01:24:49.647949 systemd[1]: Started sshd@12-172.232.28.122:22-68.220.241.50:45548.service - OpenSSH per-connection server daemon (68.220.241.50:45548).
Mar 7 01:24:49.811561 sshd[5922]: Accepted publickey for core from 68.220.241.50 port 45548 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:49.815277 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:49.821011 systemd-logind[1450]: New session 13 of user core.
Mar 7 01:24:49.824792 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:24:50.021820 sshd[5922]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:50.029478 systemd[1]: sshd@12-172.232.28.122:22-68.220.241.50:45548.service: Deactivated successfully.
Mar 7 01:24:50.033136 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:24:50.034385 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:24:50.035523 systemd-logind[1450]: Removed session 13.
Mar 7 01:24:50.071210 systemd[1]: Started sshd@13-172.232.28.122:22-68.220.241.50:45556.service - OpenSSH per-connection server daemon (68.220.241.50:45556).
Mar 7 01:24:50.251815 sshd[5935]: Accepted publickey for core from 68.220.241.50 port 45556 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:50.253711 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:50.258643 systemd-logind[1450]: New session 14 of user core.
Mar 7 01:24:50.263760 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:24:50.669089 sshd[5935]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:50.674230 systemd[1]: sshd@13-172.232.28.122:22-68.220.241.50:45556.service: Deactivated successfully.
Mar 7 01:24:50.674570 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:24:50.677073 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:24:50.678966 systemd-logind[1450]: Removed session 14.
Mar 7 01:24:50.706211 systemd[1]: Started sshd@14-172.232.28.122:22-68.220.241.50:45568.service - OpenSSH per-connection server daemon (68.220.241.50:45568).
Mar 7 01:24:50.865193 sshd[5946]: Accepted publickey for core from 68.220.241.50 port 45568 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:50.867267 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:50.873084 systemd-logind[1450]: New session 15 of user core.
Mar 7 01:24:50.879060 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:24:51.506738 sshd[5946]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:51.510938 systemd[1]: sshd@14-172.232.28.122:22-68.220.241.50:45568.service: Deactivated successfully.
Mar 7 01:24:51.515170 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:24:51.517723 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:24:51.519619 systemd-logind[1450]: Removed session 15.
Mar 7 01:24:51.540282 systemd[1]: Started sshd@15-172.232.28.122:22-68.220.241.50:45578.service - OpenSSH per-connection server daemon (68.220.241.50:45578).
Mar 7 01:24:51.688274 sshd[5970]: Accepted publickey for core from 68.220.241.50 port 45578 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:51.690556 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:51.698439 systemd-logind[1450]: New session 16 of user core.
Mar 7 01:24:51.704372 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:24:52.010482 sshd[5970]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:52.015378 systemd[1]: sshd@15-172.232.28.122:22-68.220.241.50:45578.service: Deactivated successfully.
Mar 7 01:24:52.019147 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:24:52.020612 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:24:52.021839 systemd-logind[1450]: Removed session 16.
Mar 7 01:24:52.050890 systemd[1]: Started sshd@16-172.232.28.122:22-68.220.241.50:45580.service - OpenSSH per-connection server daemon (68.220.241.50:45580).
Mar 7 01:24:52.198805 sshd[5981]: Accepted publickey for core from 68.220.241.50 port 45580 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:52.200566 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:52.206106 systemd-logind[1450]: New session 17 of user core.
Mar 7 01:24:52.209768 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:24:52.393917 sshd[5981]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:52.399984 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:24:52.401185 systemd[1]: sshd@16-172.232.28.122:22-68.220.241.50:45580.service: Deactivated successfully.
Mar 7 01:24:52.403826 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:24:52.405097 systemd-logind[1450]: Removed session 17.
Mar 7 01:24:53.392420 kubelet[2548]: E0307 01:24:53.392371 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Mar 7 01:24:57.428951 systemd[1]: Started sshd@17-172.232.28.122:22-68.220.241.50:51642.service - OpenSSH per-connection server daemon (68.220.241.50:51642).
Mar 7 01:24:57.587305 sshd[6019]: Accepted publickey for core from 68.220.241.50 port 51642 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:24:57.589117 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:57.594503 systemd-logind[1450]: New session 18 of user core.
Mar 7 01:24:57.599790 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:24:57.768181 sshd[6019]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:57.772310 systemd[1]: sshd@17-172.232.28.122:22-68.220.241.50:51642.service: Deactivated successfully.
Mar 7 01:24:57.776180 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:24:57.777140 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:24:57.778275 systemd-logind[1450]: Removed session 18.
Mar 7 01:25:02.813148 systemd[1]: Started sshd@18-172.232.28.122:22-68.220.241.50:55258.service - OpenSSH per-connection server daemon (68.220.241.50:55258).
Mar 7 01:25:02.982222 sshd[6034]: Accepted publickey for core from 68.220.241.50 port 55258 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:25:02.984041 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:02.988428 systemd-logind[1450]: New session 19 of user core.
Mar 7 01:25:02.993947 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:25:03.185583 sshd[6034]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:03.190021 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:25:03.190929 systemd[1]: sshd@18-172.232.28.122:22-68.220.241.50:55258.service: Deactivated successfully.
Mar 7 01:25:03.195541 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:25:03.201964 systemd-logind[1450]: Removed session 19.
Mar 7 01:25:03.697254 systemd[1]: run-containerd-runc-k8s.io-e4512cfe1546f4447edbb94572585f78f6c0156d0fa3ce2a33fdd5a58ac9175f-runc.bY6QeD.mount: Deactivated successfully.
Mar 7 01:25:04.393577 kubelet[2548]: E0307 01:25:04.392800 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"