Mar 7 01:16:11.989821 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:16:11.989841 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:16:11.989850 kernel: BIOS-provided physical RAM map:
Mar 7 01:16:11.989856 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 7 01:16:11.989861 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 7 01:16:11.989870 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:16:11.989876 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 7 01:16:11.989882 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 7 01:16:11.989888 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:16:11.989894 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:16:11.989900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:16:11.989905 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:16:11.989911 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 7 01:16:11.989919 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:16:11.989926 kernel: NX (Execute Disable) protection: active
Mar 7 01:16:11.989961 kernel: APIC: Static calls initialized
Mar 7 01:16:11.989967 kernel: SMBIOS 2.8 present.
Mar 7 01:16:11.989973 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 7 01:16:11.989979 kernel: Hypervisor detected: KVM
Mar 7 01:16:11.989988 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:16:11.989994 kernel: kvm-clock: using sched offset of 6027654676 cycles
Mar 7 01:16:11.990000 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:16:11.990007 kernel: tsc: Detected 1999.999 MHz processor
Mar 7 01:16:11.990013 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:16:11.990020 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:16:11.990026 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 7 01:16:11.990032 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:16:11.990039 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:16:11.990047 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 7 01:16:11.990053 kernel: Using GB pages for direct mapping
Mar 7 01:16:11.990060 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:16:11.990066 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 7 01:16:11.990072 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990078 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990085 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990091 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 7 01:16:11.990097 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990106 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990112 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990118 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:16:11.990128 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 7 01:16:11.990135 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 7 01:16:11.990141 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 7 01:16:11.990151 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 7 01:16:11.990157 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 7 01:16:11.990164 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 7 01:16:11.990170 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 7 01:16:11.990177 kernel: No NUMA configuration found
Mar 7 01:16:11.990183 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 7 01:16:11.990190 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Mar 7 01:16:11.990196 kernel: Zone ranges:
Mar 7 01:16:11.991282 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:16:11.991291 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:16:11.991298 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:16:11.991304 kernel: Movable zone start for each node
Mar 7 01:16:11.991316 kernel: Early memory node ranges
Mar 7 01:16:11.991327 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:16:11.991337 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 7 01:16:11.991348 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 7 01:16:11.991359 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 7 01:16:11.991370 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:16:11.991385 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:16:11.991395 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 7 01:16:11.991406 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:16:11.991417 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:16:11.991427 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:16:11.991437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:16:11.991448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:16:11.991459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:16:11.991470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:16:11.991484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:16:11.991495 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:16:11.991506 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:16:11.991517 kernel: TSC deadline timer available
Mar 7 01:16:11.991528 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:16:11.991539 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:16:11.991550 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:16:11.991561 kernel: kvm-guest: setup PV sched yield
Mar 7 01:16:11.991572 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:16:11.991587 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:16:11.991598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:16:11.991608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:16:11.991619 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:16:11.991630 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:16:11.991641 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:16:11.991651 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:16:11.991662 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:16:11.991674 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:16:11.991688 kernel: random: crng init done
Mar 7 01:16:11.991699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:16:11.991710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:16:11.991720 kernel: Fallback order for Node 0: 0
Mar 7 01:16:11.991731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 7 01:16:11.991742 kernel: Policy zone: Normal
Mar 7 01:16:11.991753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:16:11.991763 kernel: software IO TLB: area num 2.
Mar 7 01:16:11.991778 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved)
Mar 7 01:16:11.991788 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:16:11.991799 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:16:11.991810 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:16:11.991821 kernel: Dynamic Preempt: voluntary
Mar 7 01:16:11.991832 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:16:11.991843 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:16:11.991855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:16:11.991866 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:16:11.991879 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:16:11.991890 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:16:11.991901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:16:11.991912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:16:11.991923 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:16:11.991933 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:16:11.991944 kernel: Console: colour VGA+ 80x25
Mar 7 01:16:11.991955 kernel: printk: console [tty0] enabled
Mar 7 01:16:11.991965 kernel: printk: console [ttyS0] enabled
Mar 7 01:16:11.991979 kernel: ACPI: Core revision 20230628
Mar 7 01:16:11.991990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:16:11.992001 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:16:11.992012 kernel: x2apic enabled
Mar 7 01:16:11.992033 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:16:11.992047 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:16:11.992060 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:16:11.992071 kernel: kvm-guest: setup PV IPIs
Mar 7 01:16:11.992083 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:16:11.992096 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:16:11.992104 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Mar 7 01:16:11.992112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:16:11.992121 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:16:11.992128 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:16:11.992135 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:16:11.992142 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:16:11.992149 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:16:11.992158 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 7 01:16:11.992165 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:16:11.992172 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:16:11.992178 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:16:11.992186 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:16:11.992193 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:16:11.992224 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:16:11.992231 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:16:11.992240 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:16:11.992247 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:16:11.992254 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:16:11.992261 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:16:11.992268 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:16:11.992274 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:16:11.992281 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 7 01:16:11.992288 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 7 01:16:11.992295 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:16:11.992304 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:16:11.992311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:16:11.992317 kernel: landlock: Up and running.
Mar 7 01:16:11.992324 kernel: SELinux: Initializing.
Mar 7 01:16:11.992331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:16:11.992338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:16:11.992344 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:16:11.992351 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:16:11.992358 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:16:11.992368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:16:11.992374 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 7 01:16:11.992381 kernel: ... version: 0
Mar 7 01:16:11.992388 kernel: ... bit width: 48
Mar 7 01:16:11.992394 kernel: ... generic registers: 6
Mar 7 01:16:11.992401 kernel: ... value mask: 0000ffffffffffff
Mar 7 01:16:11.992408 kernel: ... max period: 00007fffffffffff
Mar 7 01:16:11.992416 kernel: ... fixed-purpose events: 0
Mar 7 01:16:11.992427 kernel: ... event mask: 000000000000003f
Mar 7 01:16:11.992442 kernel: signal: max sigframe size: 3376
Mar 7 01:16:11.992453 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:16:11.992464 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:16:11.992476 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:16:11.992487 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:16:11.992498 kernel: .... node #0, CPUs: #1
Mar 7 01:16:11.992510 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:16:11.992521 kernel: smpboot: Max logical packages: 1
Mar 7 01:16:11.992532 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Mar 7 01:16:11.992546 kernel: devtmpfs: initialized
Mar 7 01:16:11.992558 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:16:11.992747 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:16:11.992759 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:16:11.992770 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:16:11.992782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:16:11.992793 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:16:11.992805 kernel: audit: type=2000 audit(1772846170.531:1): state=initialized audit_enabled=0 res=1
Mar 7 01:16:11.992816 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:16:11.992830 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:16:11.992842 kernel: cpuidle: using governor menu
Mar 7 01:16:11.992853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:16:11.992863 kernel: dca service started, version 1.12.1
Mar 7 01:16:11.992870 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:16:11.992876 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:16:11.992883 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:16:11.992890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:16:11.992897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:16:11.992907 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:16:11.992914 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:16:11.992921 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:16:11.992927 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:16:11.992934 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:16:11.992941 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:16:11.992947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:16:11.992954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:16:11.992961 kernel: ACPI: Interpreter enabled
Mar 7 01:16:11.992970 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:16:11.992977 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:16:11.992984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:16:11.992990 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:16:11.992997 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:16:11.993004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:16:11.993189 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:16:11.996800 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:16:11.996942 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:16:11.996953 kernel: PCI host bridge to bus 0000:00
Mar 7 01:16:11.997081 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:16:11.997214 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:16:11.997543 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:16:11.997659 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:16:11.997772 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:16:11.997891 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:16:11.998003 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:16:11.998153 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:16:12.000334 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:16:12.000470 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:16:12.000596 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:16:12.000727 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:16:12.000850 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:16:12.000985 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:16:12.001111 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:16:12.001299 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:16:12.001427 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:16:12.001561 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:16:12.001691 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:16:12.001815 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:16:12.001938 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:16:12.002059 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:16:12.002191 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:16:12.002931 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:16:12.003069 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:16:12.003220 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:16:12.003351 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:16:12.003486 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:16:12.003806 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:16:12.003816 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:16:12.003823 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:16:12.003830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:16:12.003841 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:16:12.003848 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:16:12.003855 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:16:12.003862 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:16:12.003869 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:16:12.003876 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:16:12.003883 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:16:12.003890 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:16:12.003897 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:16:12.003907 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:16:12.003914 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:16:12.003920 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:16:12.003927 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:16:12.003934 kernel: iommu: Default domain type: Translated
Mar 7 01:16:12.003941 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:16:12.003948 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:16:12.003955 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:16:12.003961 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:16:12.003971 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:16:12.004095 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:16:12.009422 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:16:12.009565 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:16:12.009576 kernel: vgaarb: loaded
Mar 7 01:16:12.009584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:16:12.009591 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:16:12.009598 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:16:12.009610 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:16:12.009617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:16:12.009623 kernel: pnp: PnP ACPI init
Mar 7 01:16:12.009766 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:16:12.009777 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:16:12.009784 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:16:12.009792 kernel: NET: Registered PF_INET protocol family
Mar 7 01:16:12.009799 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:16:12.009809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:16:12.009816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:16:12.009823 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:16:12.009830 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:16:12.009836 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:16:12.009843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:16:12.009850 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:16:12.009857 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:16:12.009864 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:16:12.010009 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:16:12.010127 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:16:12.011432 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:16:12.011551 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:16:12.011665 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:16:12.011778 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:16:12.011788 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:16:12.011795 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:16:12.011806 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:16:12.011813 kernel: Initialise system trusted keyrings
Mar 7 01:16:12.011820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:16:12.011827 kernel: Key type asymmetric registered
Mar 7 01:16:12.011834 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:16:12.011841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:16:12.011848 kernel: io scheduler mq-deadline registered
Mar 7 01:16:12.011855 kernel: io scheduler kyber registered
Mar 7 01:16:12.011862 kernel: io scheduler bfq registered
Mar 7 01:16:12.011868 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:16:12.011878 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:16:12.011886 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:16:12.011893 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:16:12.011900 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:16:12.011907 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:16:12.011914 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:16:12.011921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:16:12.012048 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:16:12.012063 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:16:12.012182 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:16:12.014961 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:16:11 UTC (1772846171)
Mar 7 01:16:12.015083 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:16:12.015093 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:16:12.015100 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:16:12.015108 kernel: Segment Routing with IPv6
Mar 7 01:16:12.015115 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:16:12.015126 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:16:12.015133 kernel: Key type dns_resolver registered
Mar 7 01:16:12.015140 kernel: IPI shorthand broadcast: enabled
Mar 7 01:16:12.015148 kernel: sched_clock: Marking stable (937003460, 333041742)->(1425594152, -155548950)
Mar 7 01:16:12.015155 kernel: registered taskstats version 1
Mar 7 01:16:12.015162 kernel: Loading compiled-in X.509 certificates
Mar 7 01:16:12.015169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:16:12.015176 kernel: Key type .fscrypt registered
Mar 7 01:16:12.015183 kernel: Key type fscrypt-provisioning registered
Mar 7 01:16:12.015193 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:16:12.015214 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:16:12.015221 kernel: ima: No architecture policies found
Mar 7 01:16:12.015229 kernel: clk: Disabling unused clocks
Mar 7 01:16:12.015236 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:16:12.015243 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:16:12.015250 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:16:12.015257 kernel: Run /init as init process
Mar 7 01:16:12.015265 kernel: with arguments:
Mar 7 01:16:12.015275 kernel: /init
Mar 7 01:16:12.015282 kernel: with environment:
Mar 7 01:16:12.015289 kernel: HOME=/
Mar 7 01:16:12.015296 kernel: TERM=linux
Mar 7 01:16:12.015305 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:16:12.015314 systemd[1]: Detected virtualization kvm.
Mar 7 01:16:12.015322 systemd[1]: Detected architecture x86-64.
Mar 7 01:16:12.015329 systemd[1]: Running in initrd.
Mar 7 01:16:12.015340 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:16:12.015347 systemd[1]: Hostname set to .
Mar 7 01:16:12.015355 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:16:12.015362 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:16:12.015554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:16:12.015577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:16:12.015590 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:16:12.015598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:16:12.015606 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:16:12.015614 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:16:12.015623 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:16:12.015631 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:16:12.015641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:16:12.015649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:16:12.015657 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:16:12.015665 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:16:12.015673 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:16:12.015680 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:16:12.015688 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:16:12.015696 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:16:12.015704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:16:12.015714 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:16:12.015722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:16:12.015730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:16:12.015738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:16:12.015746 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:16:12.015754 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:16:12.015762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:16:12.015770 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:16:12.015777 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:16:12.015788 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:16:12.015796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:16:12.015823 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:16:12.015843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:12.015852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:16:12.015862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:16:12.015870 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:16:12.015881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:16:12.015889 systemd-journald[178]: Journal started
Mar 7 01:16:12.015906 systemd-journald[178]: Runtime Journal (/run/log/journal/f98a1b05b9dc470797295405286e3acf) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:16:12.024345 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:16:12.023887 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:16:12.116195 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:16:12.116245 kernel: Bridge firewalling registered
Mar 7 01:16:12.054947 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:16:12.116239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:16:12.117762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:12.119684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:16:12.127361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:12.130335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:16:12.133354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:16:12.145348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:16:12.176803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:16:12.179436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:12.180772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:16:12.182875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:16:12.189335 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:16:12.193323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:16:12.206239 dracut-cmdline[213]: dracut-dracut-053
Mar 7 01:16:12.211123 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:16:12.226835 systemd-resolved[216]: Positive Trust Anchors:
Mar 7 01:16:12.227351 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:16:12.227379 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:16:12.231414 systemd-resolved[216]: Defaulting to hostname 'linux'.
Mar 7 01:16:12.232702 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:16:12.236466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:16:12.290224 kernel: SCSI subsystem initialized
Mar 7 01:16:12.301227 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:16:12.313226 kernel: iscsi: registered transport (tcp)
Mar 7 01:16:12.334284 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:16:12.334325 kernel: QLogic iSCSI HBA Driver
Mar 7 01:16:12.378088 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:16:12.384337 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:16:12.414426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:16:12.414464 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:16:12.415337 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:16:12.461232 kernel: raid6: avx2x4 gen() 29462 MB/s
Mar 7 01:16:12.479227 kernel: raid6: avx2x2 gen() 27339 MB/s
Mar 7 01:16:12.497521 kernel: raid6: avx2x1 gen() 24162 MB/s
Mar 7 01:16:12.497539 kernel: raid6: using algorithm avx2x4 gen() 29462 MB/s
Mar 7 01:16:12.520392 kernel: raid6: .... xor() 4878 MB/s, rmw enabled
Mar 7 01:16:12.520414 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:16:12.543231 kernel: xor: automatically using best checksumming function avx
Mar 7 01:16:12.670250 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:16:12.682516 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:16:12.687326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:16:12.713018 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Mar 7 01:16:12.717917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:16:12.727690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:16:12.743598 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Mar 7 01:16:12.776181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:16:12.782344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:16:12.856141 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:16:12.864346 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:16:12.882983 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:16:12.887259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:16:12.889500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:16:12.891396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:16:12.898357 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:16:12.914398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:16:12.935616 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:16:12.940414 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 01:16:12.954248 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:16:12.974303 kernel: libata version 3.00 loaded.
Mar 7 01:16:12.987212 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:16:12.987429 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:16:12.995474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:16:12.999580 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:16:12.999762 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:16:12.996352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:13.190384 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:16:13.190415 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:16:13.189660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:13.198283 kernel: scsi host1: ahci
Mar 7 01:16:13.198481 kernel: scsi host2: ahci
Mar 7 01:16:13.198636 kernel: scsi host3: ahci
Mar 7 01:16:13.190844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:16:13.191050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:13.197492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:13.207492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:13.211222 kernel: scsi host4: ahci
Mar 7 01:16:13.244152 kernel: scsi host5: ahci
Mar 7 01:16:13.258542 kernel: scsi host6: ahci
Mar 7 01:16:13.258751 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Mar 7 01:16:13.258765 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Mar 7 01:16:13.258775 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Mar 7 01:16:13.258786 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Mar 7 01:16:13.258796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Mar 7 01:16:13.258806 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Mar 7 01:16:13.267217 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:16:13.267509 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:16:13.267860 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:16:13.268189 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:16:13.268727 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:16:13.269470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:16:13.269483 kernel: GPT:9289727 != 167739391
Mar 7 01:16:13.269499 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:16:13.269508 kernel: GPT:9289727 != 167739391
Mar 7 01:16:13.269517 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:16:13.269527 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:13.269537 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:16:13.393094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:13.403384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:13.426842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:13.569167 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.569259 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.572223 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.572251 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.577384 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.578221 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.617239 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (450)
Mar 7 01:16:13.621367 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (451)
Mar 7 01:16:13.620433 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:16:13.633273 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:16:13.641930 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:16:13.644232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:16:13.649700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:16:13.656341 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:16:13.661890 disk-uuid[569]: Primary Header is updated.
Mar 7 01:16:13.661890 disk-uuid[569]: Secondary Entries is updated.
Mar 7 01:16:13.661890 disk-uuid[569]: Secondary Header is updated.
Mar 7 01:16:13.668226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:13.675222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:14.679262 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:14.681047 disk-uuid[570]: The operation has completed successfully.
Mar 7 01:16:14.737336 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:16:14.737464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:16:14.754331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:16:14.760404 sh[584]: Success
Mar 7 01:16:14.774291 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:16:14.827083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:16:14.838308 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:16:14.839732 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:16:14.860813 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:16:14.860842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:14.864233 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:16:14.867570 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:16:14.872063 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:16:14.880232 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:16:14.882822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:16:14.885056 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:16:14.909342 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:16:14.914336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:16:14.929344 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:14.929570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:14.932379 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:14.943434 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:14.943458 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:14.962508 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:14.962248 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:16:14.970393 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:16:14.979735 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:16:15.052845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:16:15.063715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:16:15.073510 ignition[698]: Ignition 2.19.0
Mar 7 01:16:15.074415 ignition[698]: Stage: fetch-offline
Mar 7 01:16:15.074466 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:15.074477 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:15.074554 ignition[698]: parsed url from cmdline: ""
Mar 7 01:16:15.074559 ignition[698]: no config URL provided
Mar 7 01:16:15.074564 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.074574 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.080145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:16:15.074580 ignition[698]: failed to fetch config: resource requires networking
Mar 7 01:16:15.074732 ignition[698]: Ignition finished successfully
Mar 7 01:16:15.098276 systemd-networkd[770]: lo: Link UP
Mar 7 01:16:15.098284 systemd-networkd[770]: lo: Gained carrier
Mar 7 01:16:15.100127 systemd-networkd[770]: Enumeration completed
Mar 7 01:16:15.100233 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:16:15.101060 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:15.101065 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:16:15.103018 systemd-networkd[770]: eth0: Link UP
Mar 7 01:16:15.103023 systemd-networkd[770]: eth0: Gained carrier
Mar 7 01:16:15.103031 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:15.103497 systemd[1]: Reached target network.target - Network.
Mar 7 01:16:15.110363 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:16:15.130718 ignition[773]: Ignition 2.19.0
Mar 7 01:16:15.130733 ignition[773]: Stage: fetch
Mar 7 01:16:15.130933 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:15.130948 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:15.131064 ignition[773]: parsed url from cmdline: ""
Mar 7 01:16:15.131071 ignition[773]: no config URL provided
Mar 7 01:16:15.131079 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.131091 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.131115 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:16:15.131300 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.331480 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:16:15.331898 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.732175 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:16:15.732348 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.890281 systemd-networkd[770]: eth0: DHCPv4 address 172.236.123.47/24, gateway 172.236.123.1 acquired from 23.213.15.224
Mar 7 01:16:16.173356 systemd-networkd[770]: eth0: Gained IPv6LL
Mar 7 01:16:16.533139 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:16:16.631230 ignition[773]: PUT result: OK
Mar 7 01:16:16.631288 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:16:16.745529 ignition[773]: GET result: OK
Mar 7 01:16:16.745875 ignition[773]: parsing config with SHA512: 240a079ea1fe952544043133194fb2400f3b89bc56ccd6469c9c69233d5adb45fc961e8ea16e5bbd7f0ba902982e2e1698c48d585b5dc36badfc74f91cb82834
Mar 7 01:16:16.750598 unknown[773]: fetched base config from "system"
Mar 7 01:16:16.751374 ignition[773]: fetch: fetch complete
Mar 7 01:16:16.750613 unknown[773]: fetched base config from "system"
Mar 7 01:16:16.751380 ignition[773]: fetch: fetch passed
Mar 7 01:16:16.750620 unknown[773]: fetched user config from "akamai"
Mar 7 01:16:16.751623 ignition[773]: Ignition finished successfully
Mar 7 01:16:16.756500 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:16:16.762316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:16:16.778938 ignition[780]: Ignition 2.19.0
Mar 7 01:16:16.778951 ignition[780]: Stage: kargs
Mar 7 01:16:16.779095 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:16.781986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:16:16.779107 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:16.779923 ignition[780]: kargs: kargs passed
Mar 7 01:16:16.779991 ignition[780]: Ignition finished successfully
Mar 7 01:16:16.789332 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:16:16.803106 ignition[787]: Ignition 2.19.0
Mar 7 01:16:16.803119 ignition[787]: Stage: disks
Mar 7 01:16:16.803285 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:16.805732 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:16:16.803296 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:16.829878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:16:16.804326 ignition[787]: disks: disks passed
Mar 7 01:16:16.830901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:16:16.804366 ignition[787]: Ignition finished successfully
Mar 7 01:16:16.832538 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:16:16.834257 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:16:16.835776 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:16:16.842372 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:16:16.861001 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:16:16.864460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:16:16.871277 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:16:16.957220 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:16:16.957972 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:16:16.959383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:16:16.965269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:16:16.968291 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:16:16.970549 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:16:16.971295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:16:16.971326 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:16:16.985012 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Mar 7 01:16:16.985043 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:16.985380 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:16:16.996164 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:16.996179 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:17.002768 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:16:17.008331 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:17.008356 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:17.008690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:16:17.057858 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:16:17.064881 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:16:17.070633 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:16:17.076805 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:16:17.183304 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:16:17.193310 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:16:17.198064 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:16:17.205641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:16:17.210034 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:17.239537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:16:17.241310 ignition[917]: INFO : Ignition 2.19.0
Mar 7 01:16:17.241310 ignition[917]: INFO : Stage: mount
Mar 7 01:16:17.241310 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:17.241310 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:17.241310 ignition[917]: INFO : mount: mount passed
Mar 7 01:16:17.241310 ignition[917]: INFO : Ignition finished successfully
Mar 7 01:16:17.242816 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:16:17.251297 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:16:17.963336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:16:17.979224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (928)
Mar 7 01:16:17.984259 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:17.984285 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:17.989071 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:17.996293 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:17.996316 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:17.999066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:16:18.028836 ignition[945]: INFO : Ignition 2.19.0
Mar 7 01:16:18.031146 ignition[945]: INFO : Stage: files
Mar 7 01:16:18.031146 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:18.031146 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:18.031146 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:16:18.035272 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:16:18.035272 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:16:18.037142 unknown[945]: wrote ssh authorized keys file for user: core
Mar 7 01:16:18.041718 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:16:18.041718 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:16:18.349929 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:16:18.402929 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:16:19.055645 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:16:19.487898 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:19.487898 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: files passed
Mar 7 01:16:19.521838 ignition[945]: INFO : Ignition finished successfully
Mar 7 01:16:19.492894 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:16:19.526455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:16:19.538343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:16:19.540864 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:16:19.541864 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:16:19.560839 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.562415 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.564259 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.565100 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:16:19.566317 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:16:19.573357 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:16:19.610619 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:16:19.610735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:16:19.612274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:16:19.613526 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:16:19.615763 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:16:19.620368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:16:19.635007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:16:19.647325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:16:19.658777 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:16:19.659866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:16:19.662053 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:16:19.663783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:16:19.663931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:16:19.665812 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:16:19.667055 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:16:19.668769 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:16:19.670295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:16:19.671780 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:16:19.673550 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 7 01:16:19.675320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:16:19.676986 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:16:19.678847 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:16:19.680498 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:16:19.682274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:16:19.682428 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:16:19.684382 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:16:19.685641 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:16:19.687016 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:16:19.687131 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:16:19.688813 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:16:19.688914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:16:19.691096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:16:19.691264 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:16:19.692336 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:16:19.692475 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:16:19.700364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:16:19.704242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:16:19.704985 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:16:19.705104 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:16:19.710331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 7 01:16:19.710435 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:16:19.723820 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:16:19.723959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:16:19.729682 ignition[997]: INFO : Ignition 2.19.0 Mar 7 01:16:19.729682 ignition[997]: INFO : Stage: umount Mar 7 01:16:19.729682 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:16:19.729682 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Mar 7 01:16:19.729682 ignition[997]: INFO : umount: umount passed Mar 7 01:16:19.729682 ignition[997]: INFO : Ignition finished successfully Mar 7 01:16:19.729767 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:16:19.729891 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:16:19.738827 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:16:19.738888 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:16:19.741923 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:16:19.741976 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:16:19.742854 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:16:19.742905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:16:19.744929 systemd[1]: Stopped target network.target - Network. Mar 7 01:16:19.746237 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:16:19.746293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:16:19.747129 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:16:19.749347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:16:19.753715 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 7 01:16:19.755001 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:16:19.779086 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:16:19.781271 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:16:19.781338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:16:19.782698 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:16:19.782748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:16:19.784277 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:16:19.784332 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:16:19.785978 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:16:19.786028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:16:19.787578 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:16:19.789083 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:16:19.791809 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:16:19.792238 systemd-networkd[770]: eth0: DHCPv6 lease lost Mar 7 01:16:19.793868 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:16:19.793976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:16:19.795427 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:16:19.795742 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:16:19.800431 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:16:19.800498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:16:19.801978 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:16:19.802035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:16:19.811338 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Mar 7 01:16:19.812176 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:16:19.812251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:16:19.815589 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:16:19.820383 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:16:19.820507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:16:19.829062 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:16:19.829157 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:16:19.832847 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:16:19.832908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:16:19.834671 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:16:19.834720 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:16:19.838217 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:16:19.838393 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:16:19.840191 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:16:19.840356 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:16:19.842851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:16:19.842916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:16:19.844498 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:16:19.844536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:16:19.845943 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:16:19.845997 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 7 01:16:19.848031 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:16:19.848081 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:16:19.849848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:16:19.849897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:16:19.858374 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:16:19.859131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:16:19.859187 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:16:19.864783 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:16:19.864850 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:16:19.865957 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:16:19.866023 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:16:19.868345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:16:19.868396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:16:19.870330 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:16:19.870445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:16:19.872087 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:16:19.878356 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:16:19.888273 systemd[1]: Switching root. 
Mar 7 01:16:19.926775 systemd-journald[178]: Journal stopped Mar 7 01:16:11.989821 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:16:11.989841 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:16:11.989850 kernel: BIOS-provided physical RAM map: Mar 7 01:16:11.989856 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Mar 7 01:16:11.989861 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Mar 7 01:16:11.989870 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 7 01:16:11.989876 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Mar 7 01:16:11.989882 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Mar 7 01:16:11.989888 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 01:16:11.989894 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 7 01:16:11.989900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 7 01:16:11.989905 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 7 01:16:11.989911 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Mar 7 01:16:11.989919 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 7 01:16:11.989926 kernel: NX (Execute Disable) protection: active Mar 7 01:16:11.989961 kernel: APIC: Static calls initialized Mar 7 01:16:11.989967 kernel: SMBIOS 2.8 present. 
Mar 7 01:16:11.989973 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Mar 7 01:16:11.989979 kernel: Hypervisor detected: KVM Mar 7 01:16:11.989988 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:16:11.989994 kernel: kvm-clock: using sched offset of 6027654676 cycles Mar 7 01:16:11.990000 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:16:11.990007 kernel: tsc: Detected 1999.999 MHz processor Mar 7 01:16:11.990013 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:16:11.990020 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:16:11.990026 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Mar 7 01:16:11.990032 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 7 01:16:11.990039 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:16:11.990047 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Mar 7 01:16:11.990053 kernel: Using GB pages for direct mapping Mar 7 01:16:11.990060 kernel: ACPI: Early table checksum verification disabled Mar 7 01:16:11.990066 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Mar 7 01:16:11.990072 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990078 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990085 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990091 kernel: ACPI: FACS 0x000000007FFE0000 000040 Mar 7 01:16:11.990097 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990106 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990112 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990118 
kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:16:11.990128 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Mar 7 01:16:11.990135 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Mar 7 01:16:11.990141 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Mar 7 01:16:11.990151 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Mar 7 01:16:11.990157 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Mar 7 01:16:11.990164 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Mar 7 01:16:11.990170 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Mar 7 01:16:11.990177 kernel: No NUMA configuration found Mar 7 01:16:11.990183 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Mar 7 01:16:11.990190 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Mar 7 01:16:11.990196 kernel: Zone ranges: Mar 7 01:16:11.991282 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:16:11.991291 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:16:11.991298 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:16:11.991304 kernel: Movable zone start for each node Mar 7 01:16:11.991316 kernel: Early memory node ranges Mar 7 01:16:11.991327 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 01:16:11.991337 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Mar 7 01:16:11.991348 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Mar 7 01:16:11.991359 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Mar 7 01:16:11.991370 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:16:11.991385 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 01:16:11.991395 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 7 01:16:11.991406 
kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:16:11.991417 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:16:11.991427 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:16:11.991437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:16:11.991448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:16:11.991459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:16:11.991470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:16:11.991484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:16:11.991495 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:16:11.991506 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:16:11.991517 kernel: TSC deadline timer available Mar 7 01:16:11.991528 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:16:11.991539 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:16:11.991550 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:16:11.991561 kernel: kvm-guest: setup PV sched yield Mar 7 01:16:11.991572 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 01:16:11.991587 kernel: Booting paravirtualized kernel on KVM Mar 7 01:16:11.991598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:16:11.991608 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:16:11.991619 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:16:11.991630 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:16:11.991641 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:16:11.991651 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:16:11.991662 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:16:11.991674 kernel: Kernel command 
line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:16:11.991688 kernel: random: crng init done Mar 7 01:16:11.991699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:16:11.991710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:16:11.991720 kernel: Fallback order for Node 0: 0 Mar 7 01:16:11.991731 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Mar 7 01:16:11.991742 kernel: Policy zone: Normal Mar 7 01:16:11.991753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:16:11.991763 kernel: software IO TLB: area num 2. Mar 7 01:16:11.991778 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227292K reserved, 0K cma-reserved) Mar 7 01:16:11.991788 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:16:11.991799 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:16:11.991810 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:16:11.991821 kernel: Dynamic Preempt: voluntary Mar 7 01:16:11.991832 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:16:11.991843 kernel: rcu: RCU event tracing is enabled. Mar 7 01:16:11.991855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:16:11.991866 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:16:11.991879 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:16:11.991890 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:16:11.991901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 01:16:11.991912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:16:11.991923 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:16:11.991933 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:16:11.991944 kernel: Console: colour VGA+ 80x25 Mar 7 01:16:11.991955 kernel: printk: console [tty0] enabled Mar 7 01:16:11.991965 kernel: printk: console [ttyS0] enabled Mar 7 01:16:11.991979 kernel: ACPI: Core revision 20230628 Mar 7 01:16:11.991990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:16:11.992001 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:16:11.992012 kernel: x2apic enabled Mar 7 01:16:11.992033 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:16:11.992047 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:16:11.992060 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:16:11.992071 kernel: kvm-guest: setup PV IPIs Mar 7 01:16:11.992083 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:16:11.992096 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:16:11.992104 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) Mar 7 01:16:11.992112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:16:11.992121 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:16:11.992128 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:16:11.992135 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:16:11.992142 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:16:11.992149 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:16:11.992158 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Mar 7 01:16:11.992165 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 7 01:16:11.992172 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 7 01:16:11.992178 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 01:16:11.992186 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 01:16:11.992193 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:16:11.992224 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:16:11.992231 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:16:11.992240 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:16:11.992247 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:16:11.992254 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:16:11.992261 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:16:11.992268 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 7 01:16:11.992274 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:16:11.992281 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Mar 7 01:16:11.992288 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Mar 7 01:16:11.992295 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:16:11.992304 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:16:11.992311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:16:11.992317 kernel: landlock: Up and running. Mar 7 01:16:11.992324 kernel: SELinux: Initializing. Mar 7 01:16:11.992331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:16:11.992338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:16:11.992344 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:16:11.992351 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:16:11.992358 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Mar 7 01:16:11.992368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:16:11.992374 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 7 01:16:11.992381 kernel: ... version: 0 Mar 7 01:16:11.992388 kernel: ... bit width: 48 Mar 7 01:16:11.992394 kernel: ... generic registers: 6 Mar 7 01:16:11.992401 kernel: ... value mask: 0000ffffffffffff Mar 7 01:16:11.992408 kernel: ... max period: 00007fffffffffff Mar 7 01:16:11.992416 kernel: ... fixed-purpose events: 0 Mar 7 01:16:11.992427 kernel: ... event mask: 000000000000003f Mar 7 01:16:11.992442 kernel: signal: max sigframe size: 3376 Mar 7 01:16:11.992453 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:16:11.992464 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:16:11.992476 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:16:11.992487 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:16:11.992498 kernel: .... node #0, CPUs: #1 Mar 7 01:16:11.992510 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:16:11.992521 kernel: smpboot: Max logical packages: 1 Mar 7 01:16:11.992532 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Mar 7 01:16:11.992546 kernel: devtmpfs: initialized Mar 7 01:16:11.992558 kernel: x86/mm: Memory block size: 128MB Mar 7 01:16:11.992747 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:16:11.992759 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:16:11.992770 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:16:11.992782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:16:11.992793 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:16:11.992805 kernel: audit: type=2000 audit(1772846170.531:1): state=initialized audit_enabled=0 res=1 Mar 7 01:16:11.992816 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:16:11.992830 kernel: 
thermal_sys: Registered thermal governor 'user_space' Mar 7 01:16:11.992842 kernel: cpuidle: using governor menu Mar 7 01:16:11.992853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:16:11.992863 kernel: dca service started, version 1.12.1 Mar 7 01:16:11.992870 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:16:11.992876 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:16:11.992883 kernel: PCI: Using configuration type 1 for base access Mar 7 01:16:11.992890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 01:16:11.992897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:16:11.992907 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:16:11.992914 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:16:11.992921 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:16:11.992927 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:16:11.992934 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:16:11.992941 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:16:11.992947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:16:11.992954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:16:11.992961 kernel: ACPI: Interpreter enabled Mar 7 01:16:11.992970 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:16:11.992977 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:16:11.992984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:16:11.992990 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:16:11.992997 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:16:11.993004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:16:11.993189 kernel: acpi 
PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:16:11.996800 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:16:11.996942 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:16:11.996953 kernel: PCI host bridge to bus 0000:00
Mar 7 01:16:11.997081 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:16:11.997214 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:16:11.997543 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:16:11.997659 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 7 01:16:11.997772 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:16:11.997891 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 7 01:16:11.998003 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:16:11.998153 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:16:12.000334 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:16:12.000470 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:16:12.000596 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:16:12.000727 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:16:12.000850 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:16:12.000985 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Mar 7 01:16:12.001111 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:16:12.001299 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:16:12.001427 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:16:12.001561 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:16:12.001691 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:16:12.001815 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:16:12.001938 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:16:12.002059 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:16:12.002191 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:16:12.002931 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:16:12.003069 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:16:12.003220 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Mar 7 01:16:12.003351 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:16:12.003486 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:16:12.003806 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:16:12.003816 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:16:12.003823 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:16:12.003830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:16:12.003841 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:16:12.003848 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:16:12.003855 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:16:12.003862 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:16:12.003869 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:16:12.003876 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:16:12.003883 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:16:12.003890 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:16:12.003897 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:16:12.003907 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:16:12.003914 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:16:12.003920 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:16:12.003927 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:16:12.003934 kernel: iommu: Default domain type: Translated
Mar 7 01:16:12.003941 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:16:12.003948 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:16:12.003955 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:16:12.003961 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 7 01:16:12.003971 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 7 01:16:12.004095 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:16:12.009422 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:16:12.009565 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:16:12.009576 kernel: vgaarb: loaded
Mar 7 01:16:12.009584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:16:12.009591 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:16:12.009598 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:16:12.009610 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:16:12.009617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:16:12.009623 kernel: pnp: PnP ACPI init
Mar 7 01:16:12.009766 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:16:12.009777 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:16:12.009784 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:16:12.009792 kernel: NET: Registered PF_INET protocol family
Mar 7 01:16:12.009799 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:16:12.009809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:16:12.009816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:16:12.009823 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:16:12.009830 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:16:12.009836 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:16:12.009843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:16:12.009850 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:16:12.009857 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:16:12.009864 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:16:12.010009 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:16:12.010127 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:16:12.011432 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:16:12.011551 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 7 01:16:12.011665 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:16:12.011778 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 7 01:16:12.011788 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:16:12.011795 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:16:12.011806 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 7 01:16:12.011813 kernel: Initialise system trusted keyrings
Mar 7 01:16:12.011820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:16:12.011827 kernel: Key type asymmetric registered
Mar 7 01:16:12.011834 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:16:12.011841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:16:12.011848 kernel: io scheduler mq-deadline registered
Mar 7 01:16:12.011855 kernel: io scheduler kyber registered
Mar 7 01:16:12.011862 kernel: io scheduler bfq registered
Mar 7 01:16:12.011868 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:16:12.011878 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:16:12.011886 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:16:12.011893 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:16:12.011900 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:16:12.011907 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:16:12.011914 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:16:12.011921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:16:12.012048 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 7 01:16:12.012063 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:16:12.012182 kernel: rtc_cmos 00:03: registered as rtc0
Mar 7 01:16:12.014961 kernel: rtc_cmos 00:03: setting system clock to 2026-03-07T01:16:11 UTC (1772846171)
Mar 7 01:16:12.015083 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:16:12.015093 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:16:12.015100 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:16:12.015108 kernel: Segment Routing with IPv6
Mar 7 01:16:12.015115 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:16:12.015126 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:16:12.015133 kernel: Key type dns_resolver registered
Mar 7 01:16:12.015140 kernel: IPI shorthand broadcast: enabled
Mar 7 01:16:12.015148 kernel: sched_clock: Marking stable (937003460, 333041742)->(1425594152, -155548950)
Mar 7 01:16:12.015155 kernel: registered taskstats version 1
Mar 7 01:16:12.015162 kernel: Loading compiled-in X.509 certificates
Mar 7 01:16:12.015169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:16:12.015176 kernel: Key type .fscrypt registered
Mar 7 01:16:12.015183 kernel: Key type fscrypt-provisioning registered
Mar 7 01:16:12.015193 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:16:12.015214 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:16:12.015221 kernel: ima: No architecture policies found
Mar 7 01:16:12.015229 kernel: clk: Disabling unused clocks
Mar 7 01:16:12.015236 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:16:12.015243 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:16:12.015250 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:16:12.015257 kernel: Run /init as init process
Mar 7 01:16:12.015265 kernel: with arguments:
Mar 7 01:16:12.015275 kernel: /init
Mar 7 01:16:12.015282 kernel: with environment:
Mar 7 01:16:12.015289 kernel: HOME=/
Mar 7 01:16:12.015296 kernel: TERM=linux
Mar 7 01:16:12.015305 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:16:12.015314 systemd[1]: Detected virtualization kvm.
Mar 7 01:16:12.015322 systemd[1]: Detected architecture x86-64.
Mar 7 01:16:12.015329 systemd[1]: Running in initrd.
Mar 7 01:16:12.015340 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:16:12.015347 systemd[1]: Hostname set to .
Mar 7 01:16:12.015355 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:16:12.015362 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:16:12.015554 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:16:12.015577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:16:12.015590 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:16:12.015598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:16:12.015606 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:16:12.015614 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:16:12.015623 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:16:12.015631 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:16:12.015641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:16:12.015649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:16:12.015657 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:16:12.015665 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:16:12.015673 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:16:12.015680 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:16:12.015688 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:16:12.015696 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:16:12.015704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:16:12.015714 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:16:12.015722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:16:12.015730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:16:12.015738 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:16:12.015746 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:16:12.015754 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:16:12.015762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:16:12.015770 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:16:12.015777 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:16:12.015788 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:16:12.015796 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:16:12.015823 systemd-journald[178]: Collecting audit messages is disabled.
Mar 7 01:16:12.015843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:12.015852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:16:12.015862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:16:12.015870 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:16:12.015881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:16:12.015889 systemd-journald[178]: Journal started
Mar 7 01:16:12.015906 systemd-journald[178]: Runtime Journal (/run/log/journal/f98a1b05b9dc470797295405286e3acf) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:16:12.024345 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:16:12.023887 systemd-modules-load[179]: Inserted module 'overlay'
Mar 7 01:16:12.116195 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:16:12.116245 kernel: Bridge firewalling registered
Mar 7 01:16:12.054947 systemd-modules-load[179]: Inserted module 'br_netfilter'
Mar 7 01:16:12.116239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:16:12.117762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:12.119684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:16:12.127361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:12.130335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:16:12.133354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:16:12.145348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:16:12.176803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:16:12.179436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:12.180772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:16:12.182875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:16:12.189335 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:16:12.193323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:16:12.206239 dracut-cmdline[213]: dracut-dracut-053
Mar 7 01:16:12.211123 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:16:12.226835 systemd-resolved[216]: Positive Trust Anchors:
Mar 7 01:16:12.227351 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:16:12.227379 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:16:12.231414 systemd-resolved[216]: Defaulting to hostname 'linux'.
Mar 7 01:16:12.232702 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:16:12.236466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:16:12.290224 kernel: SCSI subsystem initialized
Mar 7 01:16:12.301227 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:16:12.313226 kernel: iscsi: registered transport (tcp)
Mar 7 01:16:12.334284 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:16:12.334325 kernel: QLogic iSCSI HBA Driver
Mar 7 01:16:12.378088 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:16:12.384337 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:16:12.414426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:16:12.414464 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:16:12.415337 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:16:12.461232 kernel: raid6: avx2x4 gen() 29462 MB/s
Mar 7 01:16:12.479227 kernel: raid6: avx2x2 gen() 27339 MB/s
Mar 7 01:16:12.497521 kernel: raid6: avx2x1 gen() 24162 MB/s
Mar 7 01:16:12.497539 kernel: raid6: using algorithm avx2x4 gen() 29462 MB/s
Mar 7 01:16:12.520392 kernel: raid6: .... xor() 4878 MB/s, rmw enabled
Mar 7 01:16:12.520414 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:16:12.543231 kernel: xor: automatically using best checksumming function avx
Mar 7 01:16:12.670250 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:16:12.682516 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:16:12.687326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:16:12.713018 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Mar 7 01:16:12.717917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:16:12.727690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:16:12.743598 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Mar 7 01:16:12.776181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:16:12.782344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:16:12.856141 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:16:12.864346 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:16:12.882983 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:16:12.887259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:16:12.889500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:16:12.891396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:16:12.898357 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:16:12.914398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:16:12.935616 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:16:12.940414 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 01:16:12.954248 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:16:12.974303 kernel: libata version 3.00 loaded.
Mar 7 01:16:12.987212 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:16:12.987429 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:16:12.995474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:16:12.999580 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:16:12.999762 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:16:12.996352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:13.190384 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:16:13.190415 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:16:13.189660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:13.198283 kernel: scsi host1: ahci
Mar 7 01:16:13.198481 kernel: scsi host2: ahci
Mar 7 01:16:13.198636 kernel: scsi host3: ahci
Mar 7 01:16:13.190844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:16:13.191050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:13.197492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:13.207492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:13.211222 kernel: scsi host4: ahci
Mar 7 01:16:13.244152 kernel: scsi host5: ahci
Mar 7 01:16:13.258542 kernel: scsi host6: ahci
Mar 7 01:16:13.258751 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Mar 7 01:16:13.258765 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Mar 7 01:16:13.258775 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Mar 7 01:16:13.258786 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Mar 7 01:16:13.258796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Mar 7 01:16:13.258806 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Mar 7 01:16:13.267217 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 7 01:16:13.267509 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 7 01:16:13.267860 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:16:13.268189 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 7 01:16:13.268727 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:16:13.269470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:16:13.269483 kernel: GPT:9289727 != 167739391
Mar 7 01:16:13.269499 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:16:13.269508 kernel: GPT:9289727 != 167739391
Mar 7 01:16:13.269517 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:16:13.269527 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:13.269537 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:16:13.393094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:13.403384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:16:13.426842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:13.569167 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.569259 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.572223 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.572251 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.577384 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.578221 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:16:13.617239 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (450)
Mar 7 01:16:13.621367 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (451)
Mar 7 01:16:13.620433 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 01:16:13.633273 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 01:16:13.641930 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 01:16:13.644232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 01:16:13.649700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:16:13.656341 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:16:13.661890 disk-uuid[569]: Primary Header is updated.
Mar 7 01:16:13.661890 disk-uuid[569]: Secondary Entries is updated.
Mar 7 01:16:13.661890 disk-uuid[569]: Secondary Header is updated.
Mar 7 01:16:13.668226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:13.675222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:14.679262 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:16:14.681047 disk-uuid[570]: The operation has completed successfully.
Mar 7 01:16:14.737336 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:16:14.737464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:16:14.754331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:16:14.760404 sh[584]: Success
Mar 7 01:16:14.774291 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:16:14.827083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:16:14.838308 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:16:14.839732 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:16:14.860813 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:16:14.860842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:14.864233 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:16:14.867570 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:16:14.872063 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:16:14.880232 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:16:14.882822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:16:14.885056 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:16:14.909342 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:16:14.914336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:16:14.929344 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:14.929570 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:14.932379 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:14.943434 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:14.943458 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:14.962508 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:14.962248 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:16:14.970393 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:16:14.979735 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:16:15.052845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:16:15.063715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:16:15.073510 ignition[698]: Ignition 2.19.0
Mar 7 01:16:15.074415 ignition[698]: Stage: fetch-offline
Mar 7 01:16:15.074466 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:15.074477 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:15.074554 ignition[698]: parsed url from cmdline: ""
Mar 7 01:16:15.074559 ignition[698]: no config URL provided
Mar 7 01:16:15.074564 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.074574 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.080145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:16:15.074580 ignition[698]: failed to fetch config: resource requires networking
Mar 7 01:16:15.074732 ignition[698]: Ignition finished successfully
Mar 7 01:16:15.098276 systemd-networkd[770]: lo: Link UP
Mar 7 01:16:15.098284 systemd-networkd[770]: lo: Gained carrier
Mar 7 01:16:15.100127 systemd-networkd[770]: Enumeration completed
Mar 7 01:16:15.100233 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:16:15.101060 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:15.101065 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:16:15.103018 systemd-networkd[770]: eth0: Link UP
Mar 7 01:16:15.103023 systemd-networkd[770]: eth0: Gained carrier
Mar 7 01:16:15.103031 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:15.103497 systemd[1]: Reached target network.target - Network.
Mar 7 01:16:15.110363 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:16:15.130718 ignition[773]: Ignition 2.19.0
Mar 7 01:16:15.130733 ignition[773]: Stage: fetch
Mar 7 01:16:15.130933 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:15.130948 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:15.131064 ignition[773]: parsed url from cmdline: ""
Mar 7 01:16:15.131071 ignition[773]: no config URL provided
Mar 7 01:16:15.131079 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.131091 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:16:15.131115 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 7 01:16:15.131300 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.331480 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 7 01:16:15.331898 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.732175 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 7 01:16:15.732348 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 01:16:15.890281 systemd-networkd[770]: eth0: DHCPv4 address 172.236.123.47/24, gateway 172.236.123.1 acquired from 23.213.15.224
Mar 7 01:16:16.173356 systemd-networkd[770]: eth0: Gained IPv6LL
Mar 7 01:16:16.533139 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 7 01:16:16.631230 ignition[773]: PUT result: OK
Mar 7 01:16:16.631288 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 7 01:16:16.745529 ignition[773]: GET result: OK
Mar 7 01:16:16.745875 ignition[773]: parsing config with SHA512: 240a079ea1fe952544043133194fb2400f3b89bc56ccd6469c9c69233d5adb45fc961e8ea16e5bbd7f0ba902982e2e1698c48d585b5dc36badfc74f91cb82834
Mar 7 01:16:16.750598 unknown[773]: fetched base config from "system"
Mar 7 01:16:16.751374 ignition[773]: fetch: fetch complete
Mar 7 01:16:16.750613 unknown[773]: fetched base config from "system"
Mar 7 01:16:16.751380 ignition[773]: fetch: fetch passed
Mar 7 01:16:16.750620 unknown[773]: fetched user config from "akamai"
Mar 7 01:16:16.751623 ignition[773]: Ignition finished successfully
Mar 7 01:16:16.756500 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:16:16.762316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:16:16.778938 ignition[780]: Ignition 2.19.0
Mar 7 01:16:16.778951 ignition[780]: Stage: kargs
Mar 7 01:16:16.779095 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:16.781986 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:16:16.779107 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:16.779923 ignition[780]: kargs: kargs passed
Mar 7 01:16:16.779991 ignition[780]: Ignition finished successfully
Mar 7 01:16:16.789332 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:16:16.803106 ignition[787]: Ignition 2.19.0
Mar 7 01:16:16.803119 ignition[787]: Stage: disks
Mar 7 01:16:16.803285 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:16.805732 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:16:16.803296 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:16.829878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:16:16.804326 ignition[787]: disks: disks passed
Mar 7 01:16:16.830901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:16:16.804366 ignition[787]: Ignition finished successfully
Mar 7 01:16:16.832538 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:16:16.834257 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:16:16.835776 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:16:16.842372 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:16:16.861001 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:16:16.864460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:16:16.871277 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:16:16.957220 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:16:16.957972 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:16:16.959383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:16:16.965269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:16:16.968291 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:16:16.970549 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:16:16.971295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:16:16.971326 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:16:16.985012 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Mar 7 01:16:16.985043 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:16.985380 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:16:16.996164 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:16.996179 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:17.002768 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:16:17.008331 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:17.008356 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:17.008690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:16:17.057858 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:16:17.064881 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:16:17.070633 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:16:17.076805 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:16:17.183304 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:16:17.193310 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:16:17.198064 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:16:17.205641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:16:17.210034 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:17.239537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:16:17.241310 ignition[917]: INFO : Ignition 2.19.0
Mar 7 01:16:17.241310 ignition[917]: INFO : Stage: mount
Mar 7 01:16:17.241310 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:17.241310 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:17.241310 ignition[917]: INFO : mount: mount passed
Mar 7 01:16:17.241310 ignition[917]: INFO : Ignition finished successfully
Mar 7 01:16:17.242816 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:16:17.251297 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:16:17.963336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:16:17.979224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (928)
Mar 7 01:16:17.984259 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:16:17.984285 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:16:17.989071 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:16:17.996293 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:16:17.996316 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:16:17.999066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:16:18.028836 ignition[945]: INFO : Ignition 2.19.0
Mar 7 01:16:18.031146 ignition[945]: INFO : Stage: files
Mar 7 01:16:18.031146 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:18.031146 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:18.031146 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:16:18.035272 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:16:18.035272 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:16:18.037533 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:16:18.037142 unknown[945]: wrote ssh authorized keys file for user: core
Mar 7 01:16:18.041718 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:16:18.041718 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:16:18.349929 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:16:18.402929 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:18.404936 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:16:19.055645 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:16:19.487898 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:16:19.487898 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:16:19.490491 ignition[945]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:16:19.521838 ignition[945]: INFO : files: files passed
Mar 7 01:16:19.521838 ignition[945]: INFO : Ignition finished successfully
Mar 7 01:16:19.492894 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:16:19.526455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:16:19.538343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:16:19.540864 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:16:19.541864 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:16:19.560839 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.562415 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.564259 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:16:19.565100 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:16:19.566317 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
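[Editor's note: the Ignition files stage logged above writes downloaded files, a sysext symlink, and systemd units. The provisioning config itself is not part of this log; as a hedged illustration only, a Butane config roughly along these lines could produce a similar set of operations. The paths and URLs are taken from the log entries; every other field value (variant/version, the unit body, enablement) is an assumption.]

```yaml
# Hypothetical Butane sketch (NOT the actual config used on this instance).
# Only the paths and URLs below appear in the log; all else is assumed.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw
  links:
    # Symlink that activates the Kubernetes sysext image on boot.
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw
      hard: false
systemd:
  units:
    # Unit body is hypothetical; the log only shows that a unit with this
    # name was written and preset to enabled.
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm (illustrative placeholder)
        [Install]
        WantedBy=multi-user.target
```

Rendered through `butane`, a config like this becomes the Ignition JSON whose `files`/`links`/`units` operations would log as the op(3)–op(f) entries above.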
Mar 7 01:16:19.573357 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:16:19.610619 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:16:19.610735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:16:19.612274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:16:19.613526 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:16:19.615763 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:16:19.620368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:16:19.635007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:16:19.647325 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:16:19.658777 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:16:19.659866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:16:19.662053 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:16:19.663783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:16:19.663931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:16:19.665812 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:16:19.667055 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:16:19.668769 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:16:19.670295 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:16:19.671780 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:16:19.673550 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:16:19.675320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:16:19.676986 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:16:19.678847 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:16:19.680498 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:16:19.682274 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:16:19.682428 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:16:19.684382 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:16:19.685641 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:16:19.687016 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:16:19.687131 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:16:19.688813 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:16:19.688914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:16:19.691096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:16:19.691264 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:16:19.692336 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:16:19.692475 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:16:19.700364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:16:19.704242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:16:19.704985 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:16:19.705104 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:16:19.710331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:16:19.710435 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:16:19.723820 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:16:19.723959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:16:19.729682 ignition[997]: INFO : Ignition 2.19.0
Mar 7 01:16:19.729682 ignition[997]: INFO : Stage: umount
Mar 7 01:16:19.729682 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:16:19.729682 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 7 01:16:19.729682 ignition[997]: INFO : umount: umount passed
Mar 7 01:16:19.729682 ignition[997]: INFO : Ignition finished successfully
Mar 7 01:16:19.729767 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:16:19.729891 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:16:19.738827 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:16:19.738888 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:16:19.741923 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:16:19.741976 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:16:19.742854 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:16:19.742905 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:16:19.744929 systemd[1]: Stopped target network.target - Network.
Mar 7 01:16:19.746237 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:16:19.746293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:16:19.747129 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:16:19.749347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:16:19.753715 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:16:19.755001 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:16:19.779086 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:16:19.781271 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:16:19.781338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:16:19.782698 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:16:19.782748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:16:19.784277 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:16:19.784332 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:16:19.785978 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:16:19.786028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:16:19.787578 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:16:19.789083 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:16:19.791809 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:16:19.792238 systemd-networkd[770]: eth0: DHCPv6 lease lost
Mar 7 01:16:19.793868 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:16:19.793976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:16:19.795427 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:16:19.795742 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:16:19.800431 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:16:19.800498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:16:19.801978 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:16:19.802035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:16:19.811338 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:16:19.812176 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:16:19.812251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:16:19.815589 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:16:19.820383 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:16:19.820507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:16:19.829062 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:16:19.829157 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:16:19.832847 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:16:19.832908 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:16:19.834671 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:16:19.834720 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:16:19.838217 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:16:19.838393 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:16:19.840191 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:16:19.840356 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:16:19.842851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:16:19.842916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:16:19.844498 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:16:19.844536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:16:19.845943 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:16:19.845997 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:16:19.848031 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:16:19.848081 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:16:19.849848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:16:19.849897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:16:19.858374 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:16:19.859131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:16:19.859187 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:16:19.864783 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:16:19.864850 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:16:19.865957 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:16:19.866023 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:16:19.868345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:16:19.868396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:19.870330 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:16:19.870445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:16:19.872087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:16:19.878356 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:16:19.888273 systemd[1]: Switching root.
Mar 7 01:16:19.926775 systemd-journald[178]: Journal stopped
Mar 7 01:16:21.164734 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:16:21.164771 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:16:21.164783 kernel: SELinux: policy capability open_perms=1
Mar 7 01:16:21.164792 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:16:21.164806 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:16:21.164815 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:16:21.164825 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:16:21.164834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:16:21.164843 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:16:21.164853 kernel: audit: type=1403 audit(1772846180.071:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:16:21.164863 systemd[1]: Successfully loaded SELinux policy in 55.781ms.
Mar 7 01:16:21.164877 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.842ms.
Mar 7 01:16:21.164889 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:16:21.164902 systemd[1]: Detected virtualization kvm.
Mar 7 01:16:21.164920 systemd[1]: Detected architecture x86-64.
Mar 7 01:16:21.164937 systemd[1]: Detected first boot.
Mar 7 01:16:21.164957 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:16:21.164968 zram_generator::config[1039]: No configuration found.
Mar 7 01:16:21.164979 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:16:21.164989 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:16:21.165001 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:16:21.165017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:16:21.165035 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:16:21.165056 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:16:21.165073 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:16:21.165088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:16:21.165099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:16:21.165109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:16:21.165120 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:16:21.165131 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:16:21.165145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:16:21.165156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:16:21.165166 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:16:21.165176 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:16:21.165187 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:16:21.165197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:16:21.165226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:16:21.165237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:16:21.165250 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:16:21.165261 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:16:21.165274 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:16:21.165284 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:16:21.165295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:16:21.165305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:16:21.165315 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:16:21.165326 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:16:21.165338 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:16:21.165348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:16:21.165359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:16:21.165371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:16:21.165382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:16:21.165395 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:16:21.165405 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:16:21.165416 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:16:21.165426 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:16:21.165437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:16:21.165447 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:16:21.165457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:16:21.165467 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:16:21.165480 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:16:21.165491 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:16:21.165501 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:16:21.165512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:16:21.165522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:16:21.165532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:16:21.165542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:16:21.165553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:16:21.165565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:16:21.165576 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:16:21.165586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:16:21.165784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:16:21.165794 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:16:21.165805 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:16:21.165815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:16:21.165825 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:16:21.165837 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:16:21.165848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:16:21.165858 kernel: loop: module loaded
Mar 7 01:16:21.165868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:16:21.165878 kernel: ACPI: bus type drm_connector registered
Mar 7 01:16:21.165888 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:16:21.165898 kernel: fuse: init (API version 7.39)
Mar 7 01:16:21.165908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:16:21.165918 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:16:21.165931 systemd[1]: Stopped verity-setup.service.
Mar 7 01:16:21.165941 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:16:21.165974 systemd-journald[1126]: Collecting audit messages is disabled.
Mar 7 01:16:21.165995 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:16:21.166009 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:16:21.166019 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:16:21.166029 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:16:21.166042 systemd-journald[1126]: Journal started
Mar 7 01:16:21.166060 systemd-journald[1126]: Runtime Journal (/run/log/journal/94b056a300bd4618bf6130f3bbbf8faa) is 8.0M, max 78.3M, 70.3M free.
Mar 7 01:16:20.733812 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:16:20.756386 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:16:20.756923 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:16:21.171328 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:16:21.172908 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:16:21.174083 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:16:21.175157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:16:21.176379 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:16:21.177859 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:16:21.178084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:16:21.179427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:16:21.179890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:16:21.181333 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:16:21.181611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:16:21.183096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:16:21.183425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:16:21.207127 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:16:21.207402 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:16:21.208548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:16:21.208774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:16:21.209938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:16:21.211283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:16:21.212745 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:16:21.231185 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:16:21.238261 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:16:21.247924 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:16:21.248796 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:16:21.248887 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:16:21.250971 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:16:21.257649 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:16:21.262311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:16:21.263183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:16:21.264739 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:16:21.266422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:16:21.267220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:16:21.273353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:16:21.274304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:16:21.277306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:16:21.284346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:16:21.287298 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:16:21.291048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:16:21.296467 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:16:21.299379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:16:21.305365 systemd-journald[1126]: Time spent on flushing to /var/log/journal/94b056a300bd4618bf6130f3bbbf8faa is 60.572ms for 977 entries.
Mar 7 01:16:21.305365 systemd-journald[1126]: System Journal (/var/log/journal/94b056a300bd4618bf6130f3bbbf8faa) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:16:21.382460 systemd-journald[1126]: Received client request to flush runtime journal.
Mar 7 01:16:21.382497 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:16:21.329180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:16:21.343810 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:16:21.344982 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:16:21.346528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:16:21.355260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:16:21.387875 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:16:21.394325 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:16:21.418822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:16:21.427740 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:16:21.428819 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Mar 7 01:16:21.428851 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Mar 7 01:16:21.431481 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:16:21.441314 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:16:21.433068 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:16:21.440285 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:16:21.451381 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:16:21.488242 kernel: loop2: detected capacity change from 0 to 8
Mar 7 01:16:21.517251 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:16:21.522845 kernel: loop3: detected capacity change from 0 to 219192
Mar 7 01:16:21.531348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:16:21.573344 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:16:21.573366 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:16:21.578526 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 01:16:21.584500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:16:21.606397 kernel: loop5: detected capacity change from 0 to 140768
Mar 7 01:16:21.629398 kernel: loop6: detected capacity change from 0 to 8
Mar 7 01:16:21.634220 kernel: loop7: detected capacity change from 0 to 219192
Mar 7 01:16:21.657647 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 7 01:16:21.665158 (sd-merge)[1187]: Merged extensions into '/usr'.
Mar 7 01:16:21.670898 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:16:21.671114 systemd[1]: Reloading...
Mar 7 01:16:21.791234 zram_generator::config[1217]: No configuration found.
Mar 7 01:16:21.885134 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:16:21.957836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:16:22.008292 systemd[1]: Reloading finished in 336 ms.
Mar 7 01:16:22.044347 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:16:22.045692 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:16:22.047069 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:16:22.056380 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:16:22.058349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:16:22.064383 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:16:22.076568 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:16:22.076587 systemd[1]: Reloading...
Mar 7 01:16:22.104078 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Mar 7 01:16:22.107995 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:16:22.108441 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:16:22.111556 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:16:22.112115 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 7 01:16:22.112598 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 7 01:16:22.118856 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:16:22.119238 systemd-tmpfiles[1259]: Skipping /boot
Mar 7 01:16:22.145115 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:16:22.145129 systemd-tmpfiles[1259]: Skipping /boot
Mar 7 01:16:22.189248 zram_generator::config[1298]: No configuration found.
Mar 7 01:16:22.363256 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1288)
Mar 7 01:16:22.396227 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:16:22.401762 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 01:16:22.402231 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:16:22.409824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:16:22.429288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:16:22.479315 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:16:22.483062 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:16:22.484249 systemd[1]: Reloading finished in 407 ms.
Mar 7 01:16:22.504291 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 01:16:22.525267 kernel: EDAC MC: Ver: 3.0.0
Mar 7 01:16:22.527379 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:16:22.532292 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:16:22.531268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:16:22.561561 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:16:22.573708 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:16:22.582196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 01:16:22.583189 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:16:22.589347 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:16:22.594394 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:16:22.597427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:16:22.599410 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:16:22.605756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:16:22.608190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:16:22.615512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:16:22.620006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:16:22.621405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:16:22.631729 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:16:22.637321 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:16:22.646287 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:16:22.651773 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:16:22.655365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:16:22.669321 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 01:16:22.671352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:16:22.675346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:16:22.676088 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:16:22.677953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:16:22.679263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:16:22.680640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:16:22.681123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:16:22.684151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:16:22.684364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:16:22.686778 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:16:22.686944 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:16:22.688903 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:16:22.701331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:16:22.702265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:16:22.713355 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:16:22.736345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:16:22.739150 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:16:22.741269 augenrules[1403]: No rules
Mar 7 01:16:22.742159 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:16:22.747635 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:16:22.751804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:16:22.758383 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:16:22.763343 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:16:22.769563 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:16:22.784429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:16:22.787696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:16:22.791857 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:16:22.796290 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:16:22.804306 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:16:22.914841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:16:22.933730 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 01:16:22.935386 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:16:22.941196 systemd-networkd[1381]: lo: Link UP
Mar 7 01:16:22.941661 systemd-networkd[1381]: lo: Gained carrier
Mar 7 01:16:22.944408 systemd-networkd[1381]: Enumeration completed
Mar 7 01:16:22.945085 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:22.945158 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:16:22.946290 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:16:22.950064 systemd-networkd[1381]: eth0: Link UP
Mar 7 01:16:22.950138 systemd-networkd[1381]: eth0: Gained carrier
Mar 7 01:16:22.950280 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:16:22.951672 systemd-resolved[1383]: Positive Trust Anchors:
Mar 7 01:16:22.951686 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:16:22.951714 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:16:22.955369 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:16:22.958519 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Mar 7 01:16:22.969412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:16:22.970810 systemd[1]: Reached target network.target - Network.
Mar 7 01:16:22.971607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:16:22.972646 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:16:22.973798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:16:22.974914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:16:22.976024 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:16:22.977087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:16:22.977881 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:16:22.978678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:16:22.978717 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:16:22.979425 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:16:22.982240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:16:22.984600 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:16:22.990079 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:16:22.991387 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:16:22.992232 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:16:22.992955 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:16:22.993853 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:16:22.993889 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:16:22.994993 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:16:22.998353 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:16:23.003377 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:16:23.006577 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:16:23.009062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:16:23.010677 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:16:23.012354 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:16:23.016291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:16:23.020826 jq[1436]: false
Mar 7 01:16:23.021392 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:16:23.036367 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:16:23.054365 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:16:23.055568 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:16:23.056014 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:16:23.058400 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:16:23.065468 extend-filesystems[1437]: Found loop4
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found loop5
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found loop6
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found loop7
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda1
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda2
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda3
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found usr
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda4
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda6
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda7
Mar 7 01:16:23.103833 extend-filesystems[1437]: Found sda9
Mar 7 01:16:23.103833 extend-filesystems[1437]: Checking size of /dev/sda9
Mar 7 01:16:23.068311 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:16:23.073431 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:16:23.136704 jq[1447]: true
Mar 7 01:16:23.074232 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:16:23.075736 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:16:23.076266 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:16:23.139017 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:16:23.143185 dbus-daemon[1435]: [system] SELinux support is enabled
Mar 7 01:16:23.143517 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:16:23.144773 jq[1466]: true
Mar 7 01:16:23.152501 extend-filesystems[1437]: Resized partition /dev/sda9
Mar 7 01:16:23.155288 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:16:23.157474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:16:23.157512 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:16:23.163218 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 7 01:16:23.160613 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:16:23.160635 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:16:23.166414 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:16:23.166638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:16:23.171624 update_engine[1446]: I20260307 01:16:23.171550 1446 main.cc:92] Flatcar Update Engine starting
Mar 7 01:16:23.180478 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:16:23.183374 update_engine[1446]: I20260307 01:16:23.183331 1446 update_check_scheduler.cc:74] Next update check in 7m39s
Mar 7 01:16:23.186387 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:16:23.196240 tar[1452]: linux-amd64/LICENSE
Mar 7 01:16:23.196486 coreos-metadata[1434]: Mar 07 01:16:23.196 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:16:23.197414 tar[1452]: linux-amd64/helm
Mar 7 01:16:23.240464 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:16:23.240834 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:16:23.241842 systemd-logind[1444]: New seat seat0.
Mar 7 01:16:23.246557 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:16:23.348245 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1289)
Mar 7 01:16:23.354869 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:16:23.355891 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:16:23.370004 systemd[1]: Starting sshkeys.service...
Mar 7 01:16:23.418258 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:16:23.427678 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:16:23.464221 containerd[1467]: time="2026-03-07T01:16:23.463462508Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:16:23.475987 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:16:23.513637 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:16:23.522883 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:16:23.530423 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:16:23.530641 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:16:23.536215 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Mar 7 01:16:23.540972 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:16:23.550255 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:16:23.552399 coreos-metadata[1501]: Mar 07 01:16:23.552 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 7 01:16:23.552625 extend-filesystems[1474]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 7 01:16:23.552625 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 10
Mar 7 01:16:23.552625 extend-filesystems[1474]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Mar 7 01:16:23.586138 extend-filesystems[1437]: Resized filesystem in /dev/sda9
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.557154334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559547706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559570966Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559585916Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559748946Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559764636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559836186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.559848686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.560038526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.560052836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.588856 containerd[1467]: time="2026-03-07T01:16:23.560064646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:16:23.554341 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560073636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560161156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560449536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560559396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560571856Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560665686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.560718116Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567594640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567637370Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567840000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567854190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567875850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:16:23.589949 containerd[1467]: time="2026-03-07T01:16:23.567994480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:16:23.554559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568178850Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568299110Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568313790Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568325330Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568337610Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568349800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568362780Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568374510Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568386660Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568398000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568408900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568418470Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568436800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590287 containerd[1467]: time="2026-03-07T01:16:23.568448470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.557851 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568459440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568470270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568481560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568493000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568692720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568708400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568720390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568733510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568745500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568762540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568774370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568788490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568806730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568817360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:16:23.590540 containerd[1467]: time="2026-03-07T01:16:23.568827480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:16:23.584828 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568863250Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568882890Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568893070Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568903780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568912920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568927750Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568937080Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:16:23.590804 containerd[1467]: time="2026-03-07T01:16:23.568946730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.569165340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.569263050Z" level=info msg="Connect containerd service" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.569354140Z" level=info msg="using legacy CRI server" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.569364410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.569436400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570048711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570158791Z" level=info msg="Start subscribing containerd event" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570217701Z" level=info msg="Start recovering state" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570273571Z" level=info msg="Start event monitor" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570288561Z" level=info msg="Start snapshots 
syncer" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570297211Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570305911Z" level=info msg="Start streaming server" Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570722861Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.570789441Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:16:23.590932 containerd[1467]: time="2026-03-07T01:16:23.573214232Z" level=info msg="containerd successfully booted in 0.114962s" Mar 7 01:16:23.596243 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:16:23.603840 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:16:23.605091 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:16:23.723319 systemd-networkd[1381]: eth0: DHCPv4 address 172.236.123.47/24, gateway 172.236.123.1 acquired from 23.213.15.224 Mar 7 01:16:23.724531 dbus-daemon[1435]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1381 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 01:16:23.726797 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:23.737010 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 7 01:16:23.801466 dbus-daemon[1435]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:16:23.801953 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 7 01:16:23.803406 dbus-daemon[1435]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1531 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:16:23.812483 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 01:16:23.823831 polkitd[1532]: Started polkitd version 121 Mar 7 01:16:23.828536 polkitd[1532]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:16:23.828650 polkitd[1532]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:16:23.829387 polkitd[1532]: Finished loading, compiling and executing 2 rules Mar 7 01:16:23.829757 dbus-daemon[1435]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:16:23.829923 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 01:16:23.831177 polkitd[1532]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:16:23.841968 systemd-resolved[1383]: System hostname changed to '172-236-123-47'. Mar 7 01:16:23.842067 systemd-hostnamed[1531]: Hostname set to <172-236-123-47> (transient) Mar 7 01:16:23.943907 tar[1452]: linux-amd64/README.md Mar 7 01:16:23.957090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:16:24.173406 systemd-networkd[1381]: eth0: Gained IPv6LL Mar 7 01:16:24.174023 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:24.176765 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:16:24.178297 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:16:24.197808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:24.200466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 7 01:16:24.217957 coreos-metadata[1434]: Mar 07 01:16:24.217 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:16:24.236999 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:16:24.313989 coreos-metadata[1434]: Mar 07 01:16:24.313 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 7 01:16:24.494193 coreos-metadata[1434]: Mar 07 01:16:24.494 INFO Fetch successful Mar 7 01:16:24.494411 coreos-metadata[1434]: Mar 07 01:16:24.494 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 7 01:16:24.563329 coreos-metadata[1501]: Mar 07 01:16:24.562 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 7 01:16:24.653048 coreos-metadata[1501]: Mar 07 01:16:24.652 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 7 01:16:24.760829 coreos-metadata[1434]: Mar 07 01:16:24.759 INFO Fetch successful Mar 7 01:16:24.794732 coreos-metadata[1501]: Mar 07 01:16:24.794 INFO Fetch successful Mar 7 01:16:24.817632 update-ssh-keys[1567]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:16:24.818654 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:16:24.821652 systemd[1]: Finished sshkeys.service. Mar 7 01:16:24.852964 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:16:24.855314 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:24.856091 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:16:25.175364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:25.176975 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:16:25.213247 systemd[1]: Startup finished in 1.070s (kernel) + 8.348s (initrd) + 5.196s (userspace) = 14.616s. 
Mar 7 01:16:25.222572 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:16:25.725991 kubelet[1587]: E0307 01:16:25.725928 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:16:25.729652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:16:25.733540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:16:26.730084 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:16:26.738409 systemd[1]: Started sshd@0-172.236.123.47:22-68.220.241.50:48680.service - OpenSSH per-connection server daemon (68.220.241.50:48680). Mar 7 01:16:26.863091 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:26.890297 sshd[1599]: Accepted publickey for core from 68.220.241.50 port 48680 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:26.892564 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:26.900863 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:16:26.912794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:16:26.915490 systemd-logind[1444]: New session 1 of user core. Mar 7 01:16:26.926570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:16:26.932850 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 7 01:16:26.945392 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:16:27.042238 systemd[1603]: Queued start job for default target default.target. Mar 7 01:16:27.053620 systemd[1603]: Created slice app.slice - User Application Slice. Mar 7 01:16:27.053648 systemd[1603]: Reached target paths.target - Paths. Mar 7 01:16:27.053662 systemd[1603]: Reached target timers.target - Timers. Mar 7 01:16:27.055543 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:16:27.073676 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:16:27.073800 systemd[1603]: Reached target sockets.target - Sockets. Mar 7 01:16:27.073815 systemd[1603]: Reached target basic.target - Basic System. Mar 7 01:16:27.073853 systemd[1603]: Reached target default.target - Main User Target. Mar 7 01:16:27.073888 systemd[1603]: Startup finished in 122ms. Mar 7 01:16:27.074168 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:16:27.082328 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:16:27.221901 systemd[1]: Started sshd@1-172.236.123.47:22-68.220.241.50:48682.service - OpenSSH per-connection server daemon (68.220.241.50:48682). Mar 7 01:16:27.402251 sshd[1614]: Accepted publickey for core from 68.220.241.50 port 48682 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:27.403897 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:27.408904 systemd-logind[1444]: New session 2 of user core. Mar 7 01:16:27.419354 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:16:27.550323 sshd[1614]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:27.553949 systemd[1]: sshd@1-172.236.123.47:22-68.220.241.50:48682.service: Deactivated successfully. Mar 7 01:16:27.556243 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 7 01:16:27.556745 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:16:27.557555 systemd-logind[1444]: Removed session 2. Mar 7 01:16:27.584348 systemd[1]: Started sshd@2-172.236.123.47:22-68.220.241.50:48698.service - OpenSSH per-connection server daemon (68.220.241.50:48698). Mar 7 01:16:27.761530 sshd[1621]: Accepted publickey for core from 68.220.241.50 port 48698 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:27.762074 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:27.767617 systemd-logind[1444]: New session 3 of user core. Mar 7 01:16:27.777358 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:16:27.904276 sshd[1621]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:27.908115 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:16:27.909150 systemd[1]: sshd@2-172.236.123.47:22-68.220.241.50:48698.service: Deactivated successfully. Mar 7 01:16:27.911131 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:16:27.912061 systemd-logind[1444]: Removed session 3. Mar 7 01:16:27.935978 systemd[1]: Started sshd@3-172.236.123.47:22-68.220.241.50:48704.service - OpenSSH per-connection server daemon (68.220.241.50:48704). Mar 7 01:16:28.096663 sshd[1628]: Accepted publickey for core from 68.220.241.50 port 48704 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:28.098027 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:28.102899 systemd-logind[1444]: New session 4 of user core. Mar 7 01:16:28.110334 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:16:28.233406 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:28.236837 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. 
Mar 7 01:16:28.237668 systemd[1]: sshd@3-172.236.123.47:22-68.220.241.50:48704.service: Deactivated successfully. Mar 7 01:16:28.239632 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:16:28.240417 systemd-logind[1444]: Removed session 4. Mar 7 01:16:28.263142 systemd[1]: Started sshd@4-172.236.123.47:22-68.220.241.50:48708.service - OpenSSH per-connection server daemon (68.220.241.50:48708). Mar 7 01:16:28.414791 sshd[1635]: Accepted publickey for core from 68.220.241.50 port 48708 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:28.415416 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:28.420293 systemd-logind[1444]: New session 5 of user core. Mar 7 01:16:28.426327 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:16:28.537899 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:16:28.538292 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:28.555146 sudo[1638]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:28.576740 sshd[1635]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:28.580139 systemd[1]: sshd@4-172.236.123.47:22-68.220.241.50:48708.service: Deactivated successfully. Mar 7 01:16:28.583019 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:16:28.585291 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:16:28.586583 systemd-logind[1444]: Removed session 5. Mar 7 01:16:28.610496 systemd[1]: Started sshd@5-172.236.123.47:22-68.220.241.50:48722.service - OpenSSH per-connection server daemon (68.220.241.50:48722). 
Mar 7 01:16:28.767015 sshd[1643]: Accepted publickey for core from 68.220.241.50 port 48722 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:28.767635 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:28.773085 systemd-logind[1444]: New session 6 of user core. Mar 7 01:16:28.778355 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:16:28.875741 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:16:28.876092 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:28.879570 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:28.885388 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:16:28.885720 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:28.905485 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:16:28.907755 auditctl[1650]: No rules Mar 7 01:16:28.909053 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:16:28.909323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:16:28.911604 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:16:28.950306 augenrules[1668]: No rules Mar 7 01:16:28.951667 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:16:28.952765 sudo[1646]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:28.974784 sshd[1643]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:28.977955 systemd[1]: sshd@5-172.236.123.47:22-68.220.241.50:48722.service: Deactivated successfully. Mar 7 01:16:28.980007 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 7 01:16:28.981252 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:16:28.982276 systemd-logind[1444]: Removed session 6. Mar 7 01:16:29.011765 systemd[1]: Started sshd@6-172.236.123.47:22-68.220.241.50:48724.service - OpenSSH per-connection server daemon (68.220.241.50:48724). Mar 7 01:16:29.170297 sshd[1676]: Accepted publickey for core from 68.220.241.50 port 48724 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8 Mar 7 01:16:29.172339 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:29.176576 systemd-logind[1444]: New session 7 of user core. Mar 7 01:16:29.182337 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:16:29.286042 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:16:29.286412 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:29.566608 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:16:29.567270 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:16:29.833903 dockerd[1695]: time="2026-03-07T01:16:29.833778130Z" level=info msg="Starting up" Mar 7 01:16:29.927749 dockerd[1695]: time="2026-03-07T01:16:29.927445087Z" level=info msg="Loading containers: start." Mar 7 01:16:30.033228 kernel: Initializing XFRM netlink socket Mar 7 01:16:30.057716 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:30.060806 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:30.069717 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. 
Mar 7 01:16:30.119014 systemd-networkd[1381]: docker0: Link UP Mar 7 01:16:30.119673 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Mar 7 01:16:30.141192 dockerd[1695]: time="2026-03-07T01:16:30.141151404Z" level=info msg="Loading containers: done." Mar 7 01:16:30.160415 dockerd[1695]: time="2026-03-07T01:16:30.160381624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:16:30.160555 dockerd[1695]: time="2026-03-07T01:16:30.160472184Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:16:30.160661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2393679064-merged.mount: Deactivated successfully. Mar 7 01:16:30.161153 dockerd[1695]: time="2026-03-07T01:16:30.161133264Z" level=info msg="Daemon has completed initialization" Mar 7 01:16:30.194270 dockerd[1695]: time="2026-03-07T01:16:30.194195001Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:16:30.194518 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:16:30.682931 containerd[1467]: time="2026-03-07T01:16:30.682881375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:16:31.350422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178980925.mount: Deactivated successfully. 
Mar 7 01:16:32.474517 containerd[1467]: time="2026-03-07T01:16:32.474454930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:32.475754 containerd[1467]: time="2026-03-07T01:16:32.475680531Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074503" Mar 7 01:16:32.476224 containerd[1467]: time="2026-03-07T01:16:32.476152261Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:32.480173 containerd[1467]: time="2026-03-07T01:16:32.478988402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:32.480173 containerd[1467]: time="2026-03-07T01:16:32.479997953Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.797063958s" Mar 7 01:16:32.480173 containerd[1467]: time="2026-03-07T01:16:32.480043703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:16:32.480633 containerd[1467]: time="2026-03-07T01:16:32.480596693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:16:33.662734 containerd[1467]: time="2026-03-07T01:16:33.661615993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.662734 containerd[1467]: time="2026-03-07T01:16:33.662683064Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165829" Mar 7 01:16:33.663361 containerd[1467]: time="2026-03-07T01:16:33.663336904Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.666109 containerd[1467]: time="2026-03-07T01:16:33.666047485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.667372 containerd[1467]: time="2026-03-07T01:16:33.667343096Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.186650683s" Mar 7 01:16:33.667463 containerd[1467]: time="2026-03-07T01:16:33.667437036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:16:33.670533 containerd[1467]: time="2026-03-07T01:16:33.670490317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:16:34.675107 containerd[1467]: time="2026-03-07T01:16:34.674018899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:34.675107 containerd[1467]: time="2026-03-07T01:16:34.675012849Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729830" Mar 7 01:16:34.675107 containerd[1467]: time="2026-03-07T01:16:34.675069129Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:34.677870 containerd[1467]: time="2026-03-07T01:16:34.677825511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:34.679180 containerd[1467]: time="2026-03-07T01:16:34.679154021Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.008610573s" Mar 7 01:16:34.679290 containerd[1467]: time="2026-03-07T01:16:34.679272592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:16:34.679716 containerd[1467]: time="2026-03-07T01:16:34.679689622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:16:35.789361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321853705.mount: Deactivated successfully. Mar 7 01:16:35.791065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:16:35.801380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:35.973273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:16:35.976355 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:16:36.027445 kubelet[1917]: E0307 01:16:36.025447 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:16:36.032413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:16:36.032606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:16:36.144900 containerd[1467]: time="2026-03-07T01:16:36.144038143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:36.144900 containerd[1467]: time="2026-03-07T01:16:36.144864154Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861776" Mar 7 01:16:36.145399 containerd[1467]: time="2026-03-07T01:16:36.145374994Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:36.146968 containerd[1467]: time="2026-03-07T01:16:36.146946255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:36.147986 containerd[1467]: time="2026-03-07T01:16:36.147948675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.468228013s" Mar 7 01:16:36.147986 containerd[1467]: time="2026-03-07T01:16:36.147983815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:16:36.148855 containerd[1467]: time="2026-03-07T01:16:36.148685736Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:16:36.657456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788451796.mount: Deactivated successfully. Mar 7 01:16:37.428302 containerd[1467]: time="2026-03-07T01:16:37.428254535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.429467 containerd[1467]: time="2026-03-07T01:16:37.429430426Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Mar 7 01:16:37.430568 containerd[1467]: time="2026-03-07T01:16:37.430249376Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.433951 containerd[1467]: time="2026-03-07T01:16:37.433922798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.435111 containerd[1467]: time="2026-03-07T01:16:37.435083758Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.286372082s" Mar 7 01:16:37.435158 containerd[1467]: time="2026-03-07T01:16:37.435116109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:16:37.435975 containerd[1467]: time="2026-03-07T01:16:37.435946769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:16:37.931379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207215820.mount: Deactivated successfully. Mar 7 01:16:37.936318 containerd[1467]: time="2026-03-07T01:16:37.936288779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.937291 containerd[1467]: time="2026-03-07T01:16:37.937228319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Mar 7 01:16:37.937738 containerd[1467]: time="2026-03-07T01:16:37.937701290Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.941308 containerd[1467]: time="2026-03-07T01:16:37.940478021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:37.941308 containerd[1467]: time="2026-03-07T01:16:37.941196561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 505.221452ms" Mar 7 
01:16:37.941308 containerd[1467]: time="2026-03-07T01:16:37.941237011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:16:37.942243 containerd[1467]: time="2026-03-07T01:16:37.942191122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:16:38.521277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969484417.mount: Deactivated successfully. Mar 7 01:16:39.190410 containerd[1467]: time="2026-03-07T01:16:39.190361306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:39.191693 containerd[1467]: time="2026-03-07T01:16:39.191585766Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860680" Mar 7 01:16:39.192808 containerd[1467]: time="2026-03-07T01:16:39.192499717Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:39.195401 containerd[1467]: time="2026-03-07T01:16:39.195377608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:39.196405 containerd[1467]: time="2026-03-07T01:16:39.196377659Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.254128577s" Mar 7 01:16:39.196442 containerd[1467]: time="2026-03-07T01:16:39.196406989Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference 
\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:16:42.110321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:42.115377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:42.148599 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Mar 7 01:16:42.148618 systemd[1]: Reloading... Mar 7 01:16:42.297221 zram_generator::config[2112]: No configuration found. Mar 7 01:16:42.406829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:16:42.483226 systemd[1]: Reloading finished in 334 ms. Mar 7 01:16:42.537262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:42.542104 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:42.542780 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:16:42.543032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:42.548655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:42.705112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:42.713513 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:16:42.747646 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:16:42.747646 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:16:42.747981 kubelet[2165]: I0307 01:16:42.747717 2165 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:16:43.181658 kubelet[2165]: I0307 01:16:43.181616 2165 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:16:43.181658 kubelet[2165]: I0307 01:16:43.181648 2165 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:16:43.181791 kubelet[2165]: I0307 01:16:43.181685 2165 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:16:43.181791 kubelet[2165]: I0307 01:16:43.181695 2165 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:16:43.181971 kubelet[2165]: I0307 01:16:43.181946 2165 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:16:43.190569 kubelet[2165]: E0307 01:16:43.190532 2165 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.123.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:16:43.191789 kubelet[2165]: I0307 01:16:43.190661 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:16:43.194213 kubelet[2165]: E0307 01:16:43.194176 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:16:43.194326 kubelet[2165]: I0307 01:16:43.194312 2165 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 7 01:16:43.198247 kubelet[2165]: I0307 01:16:43.198230 2165 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:16:43.200263 kubelet[2165]: I0307 01:16:43.200228 2165 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:16:43.200403 kubelet[2165]: I0307 01:16:43.200264 2165 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-123-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 
01:16:43.200403 kubelet[2165]: I0307 01:16:43.200399 2165 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:16:43.200512 kubelet[2165]: I0307 01:16:43.200408 2165 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:16:43.200512 kubelet[2165]: I0307 01:16:43.200504 2165 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:16:43.202274 kubelet[2165]: I0307 01:16:43.202257 2165 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:43.202749 kubelet[2165]: I0307 01:16:43.202418 2165 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:16:43.202749 kubelet[2165]: I0307 01:16:43.202434 2165 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:16:43.202749 kubelet[2165]: I0307 01:16:43.202455 2165 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:16:43.202749 kubelet[2165]: I0307 01:16:43.202469 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:16:43.204148 kubelet[2165]: E0307 01:16:43.204112 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.123.47:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-123-47&limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:16:43.204367 kubelet[2165]: I0307 01:16:43.204349 2165 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:16:43.204726 kubelet[2165]: I0307 01:16:43.204707 2165 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:16:43.204761 kubelet[2165]: I0307 01:16:43.204735 2165 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static 
kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:16:43.204795 kubelet[2165]: W0307 01:16:43.204783 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:16:43.208461 kubelet[2165]: I0307 01:16:43.208448 2165 server.go:1262] "Started kubelet" Mar 7 01:16:43.208642 kubelet[2165]: E0307 01:16:43.208625 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.123.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:16:43.210566 kubelet[2165]: I0307 01:16:43.210553 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:16:43.211272 kubelet[2165]: E0307 01:16:43.211070 2165 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:16:43.211272 kubelet[2165]: I0307 01:16:43.211110 2165 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:16:43.212614 kubelet[2165]: I0307 01:16:43.212593 2165 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:16:43.216969 kubelet[2165]: I0307 01:16:43.216948 2165 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:16:43.217293 kubelet[2165]: I0307 01:16:43.217065 2165 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:16:43.217368 kubelet[2165]: I0307 01:16:43.217355 2165 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:16:43.220587 kubelet[2165]: I0307 01:16:43.220574 2165 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:16:43.220780 kubelet[2165]: E0307 01:16:43.220763 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-123-47\" not found" Mar 7 01:16:43.221774 kubelet[2165]: I0307 01:16:43.221757 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:16:43.223793 kubelet[2165]: I0307 01:16:43.223762 2165 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:16:43.223844 kubelet[2165]: I0307 01:16:43.223815 2165 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:16:43.226501 kubelet[2165]: E0307 01:16:43.225291 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.123.47:6443/api/v1/namespaces/default/events\": dial tcp 172.236.123.47:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-123-47.189a6a3a97a36e93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-123-47,UID:172-236-123-47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-123-47,},FirstTimestamp:2026-03-07 01:16:43.208429203 +0000 UTC m=+0.491471596,LastTimestamp:2026-03-07 01:16:43.208429203 +0000 UTC m=+0.491471596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-123-47,}" Mar 7 01:16:43.226573 kubelet[2165]: E0307 01:16:43.226543 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.123.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-123-47?timeout=10s\": dial tcp 172.236.123.47:6443: connect: connection refused" interval="200ms" Mar 7 01:16:43.227249 kubelet[2165]: I0307 01:16:43.227232 2165 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:16:43.228260 kubelet[2165]: I0307 01:16:43.227328 2165 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:16:43.232228 kubelet[2165]: E0307 01:16:43.229823 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.123.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:16:43.232276 kubelet[2165]: I0307 01:16:43.232267 2165 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:16:43.237506 kubelet[2165]: I0307 01:16:43.237478 2165 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:16:43.244508 kubelet[2165]: I0307 01:16:43.244485 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:16:43.244508 kubelet[2165]: I0307 01:16:43.244503 2165 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:16:43.244582 kubelet[2165]: I0307 01:16:43.244521 2165 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:16:43.244582 kubelet[2165]: E0307 01:16:43.244557 2165 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:16:43.255069 kubelet[2165]: E0307 01:16:43.255039 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.123.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:16:43.261701 kubelet[2165]: I0307 01:16:43.261684 2165 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:16:43.261701 kubelet[2165]: I0307 01:16:43.261699 2165 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:16:43.262020 kubelet[2165]: I0307 01:16:43.261902 2165 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:43.263513 kubelet[2165]: I0307 01:16:43.263498 2165 policy_none.go:49] "None policy: Start" Mar 7 01:16:43.263566 kubelet[2165]: I0307 01:16:43.263521 2165 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:16:43.263566 kubelet[2165]: I0307 01:16:43.263532 2165 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:16:43.264134 kubelet[2165]: I0307 01:16:43.264121 2165 policy_none.go:47] "Start" Mar 7 01:16:43.270005 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 7 01:16:43.287820 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:16:43.292053 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 01:16:43.301003 kubelet[2165]: E0307 01:16:43.300986 2165 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:16:43.301237 kubelet[2165]: I0307 01:16:43.301224 2165 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:16:43.301506 kubelet[2165]: I0307 01:16:43.301470 2165 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:16:43.302542 kubelet[2165]: I0307 01:16:43.302531 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:16:43.303707 kubelet[2165]: E0307 01:16:43.303688 2165 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:16:43.303758 kubelet[2165]: E0307 01:16:43.303719 2165 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-123-47\" not found" Mar 7 01:16:43.355170 systemd[1]: Created slice kubepods-burstable-pod9127c502dd305c52e8f5d544720a58ad.slice - libcontainer container kubepods-burstable-pod9127c502dd305c52e8f5d544720a58ad.slice. Mar 7 01:16:43.365149 kubelet[2165]: E0307 01:16:43.365121 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:43.367118 systemd[1]: Created slice kubepods-burstable-podfbaa95c0ddb2e98e933b20d5ebf3770b.slice - libcontainer container kubepods-burstable-podfbaa95c0ddb2e98e933b20d5ebf3770b.slice. 
Mar 7 01:16:43.369344 kubelet[2165]: E0307 01:16:43.369321 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:43.378489 systemd[1]: Created slice kubepods-burstable-pod7bc6b2efd52fd0c137e66a51936b76e4.slice - libcontainer container kubepods-burstable-pod7bc6b2efd52fd0c137e66a51936b76e4.slice. Mar 7 01:16:43.380587 kubelet[2165]: E0307 01:16:43.380572 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:43.403783 kubelet[2165]: I0307 01:16:43.403764 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-236-123-47" Mar 7 01:16:43.404086 kubelet[2165]: E0307 01:16:43.404027 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.123.47:6443/api/v1/nodes\": dial tcp 172.236.123.47:6443: connect: connection refused" node="172-236-123-47" Mar 7 01:16:43.427903 kubelet[2165]: E0307 01:16:43.427879 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.123.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-123-47?timeout=10s\": dial tcp 172.236.123.47:6443: connect: connection refused" interval="400ms" Mar 7 01:16:43.525533 kubelet[2165]: I0307 01:16:43.525431 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-k8s-certs\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:43.525533 kubelet[2165]: I0307 01:16:43.525465 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:43.525533 kubelet[2165]: I0307 01:16:43.525483 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-ca-certs\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:43.525533 kubelet[2165]: I0307 01:16:43.525498 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-flexvolume-dir\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:43.525533 kubelet[2165]: I0307 01:16:43.525510 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-kubeconfig\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:43.525900 kubelet[2165]: I0307 01:16:43.525525 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:43.525900 kubelet[2165]: I0307 01:16:43.525538 2165 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7bc6b2efd52fd0c137e66a51936b76e4-kubeconfig\") pod \"kube-scheduler-172-236-123-47\" (UID: \"7bc6b2efd52fd0c137e66a51936b76e4\") " pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:43.525900 kubelet[2165]: I0307 01:16:43.525551 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-ca-certs\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:43.525900 kubelet[2165]: I0307 01:16:43.525565 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-k8s-certs\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:43.606060 kubelet[2165]: I0307 01:16:43.606024 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-236-123-47" Mar 7 01:16:43.606405 kubelet[2165]: E0307 01:16:43.606382 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.123.47:6443/api/v1/nodes\": dial tcp 172.236.123.47:6443: connect: connection refused" node="172-236-123-47" Mar 7 01:16:43.667144 kubelet[2165]: E0307 01:16:43.667120 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:43.667901 containerd[1467]: time="2026-03-07T01:16:43.667859773Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-236-123-47,Uid:9127c502dd305c52e8f5d544720a58ad,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:43.671264 kubelet[2165]: E0307 01:16:43.671247 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:43.671601 containerd[1467]: time="2026-03-07T01:16:43.671566885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-123-47,Uid:fbaa95c0ddb2e98e933b20d5ebf3770b,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:43.683447 kubelet[2165]: E0307 01:16:43.683033 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:43.683500 containerd[1467]: time="2026-03-07T01:16:43.683285070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-123-47,Uid:7bc6b2efd52fd0c137e66a51936b76e4,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:43.829106 kubelet[2165]: E0307 01:16:43.828902 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.123.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-123-47?timeout=10s\": dial tcp 172.236.123.47:6443: connect: connection refused" interval="800ms" Mar 7 01:16:44.008631 kubelet[2165]: I0307 01:16:44.008307 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-236-123-47" Mar 7 01:16:44.008631 kubelet[2165]: E0307 01:16:44.008595 2165 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.123.47:6443/api/v1/nodes\": dial tcp 172.236.123.47:6443: connect: connection refused" node="172-236-123-47" Mar 7 01:16:44.044269 kubelet[2165]: E0307 01:16:44.044229 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: 
Get \"https://172.236.123.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:16:44.090841 kubelet[2165]: E0307 01:16:44.090748 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.123.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:16:44.159682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482334325.mount: Deactivated successfully. Mar 7 01:16:44.167196 containerd[1467]: time="2026-03-07T01:16:44.167141302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:44.168245 containerd[1467]: time="2026-03-07T01:16:44.168193053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:44.169011 containerd[1467]: time="2026-03-07T01:16:44.168953153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:16:44.169141 containerd[1467]: time="2026-03-07T01:16:44.169112503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:16:44.169529 containerd[1467]: time="2026-03-07T01:16:44.169483893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:44.170571 containerd[1467]: 
time="2026-03-07T01:16:44.170367614Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:44.170571 containerd[1467]: time="2026-03-07T01:16:44.170530424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Mar 7 01:16:44.173446 containerd[1467]: time="2026-03-07T01:16:44.173387895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:44.175372 containerd[1467]: time="2026-03-07T01:16:44.175348486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.716431ms" Mar 7 01:16:44.176652 containerd[1467]: time="2026-03-07T01:16:44.176606567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.679994ms" Mar 7 01:16:44.178594 containerd[1467]: time="2026-03-07T01:16:44.178403148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 495.078768ms" Mar 7 01:16:44.270246 kubelet[2165]: E0307 01:16:44.270096 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.123.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:16:44.303186 containerd[1467]: time="2026-03-07T01:16:44.303088010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:44.303638 containerd[1467]: time="2026-03-07T01:16:44.303327760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:44.303638 containerd[1467]: time="2026-03-07T01:16:44.303565650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.304182 containerd[1467]: time="2026-03-07T01:16:44.304047551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.311783 containerd[1467]: time="2026-03-07T01:16:44.311503634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:44.311783 containerd[1467]: time="2026-03-07T01:16:44.311555244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:44.311783 containerd[1467]: time="2026-03-07T01:16:44.311566524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.311783 containerd[1467]: time="2026-03-07T01:16:44.311654564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.329383 containerd[1467]: time="2026-03-07T01:16:44.328749693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:44.329383 containerd[1467]: time="2026-03-07T01:16:44.328809753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:44.329383 containerd[1467]: time="2026-03-07T01:16:44.328824773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.329383 containerd[1467]: time="2026-03-07T01:16:44.328899033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:44.351387 systemd[1]: Started cri-containerd-1a9a73a651c9d928505234e68cdadbc6db7bbfdefe5bcf5ec703cbf99ce45c6f.scope - libcontainer container 1a9a73a651c9d928505234e68cdadbc6db7bbfdefe5bcf5ec703cbf99ce45c6f. Mar 7 01:16:44.363167 systemd[1]: Started cri-containerd-267c0982545e8a44d44e28e26e229d2a6971374d9c0262265a8683bcbfb52606.scope - libcontainer container 267c0982545e8a44d44e28e26e229d2a6971374d9c0262265a8683bcbfb52606. Mar 7 01:16:44.387555 systemd[1]: Started cri-containerd-9d4684bebf9508d92d48e02410b0c9fda364c4f5cbfdf7e62f8d5802d7dabfe4.scope - libcontainer container 9d4684bebf9508d92d48e02410b0c9fda364c4f5cbfdf7e62f8d5802d7dabfe4. 
Mar 7 01:16:44.450515 containerd[1467]: time="2026-03-07T01:16:44.449635553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-123-47,Uid:fbaa95c0ddb2e98e933b20d5ebf3770b,Namespace:kube-system,Attempt:0,} returns sandbox id \"267c0982545e8a44d44e28e26e229d2a6971374d9c0262265a8683bcbfb52606\"" Mar 7 01:16:44.455491 kubelet[2165]: E0307 01:16:44.455161 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:44.460226 containerd[1467]: time="2026-03-07T01:16:44.460174989Z" level=info msg="CreateContainer within sandbox \"267c0982545e8a44d44e28e26e229d2a6971374d9c0262265a8683bcbfb52606\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:16:44.462505 containerd[1467]: time="2026-03-07T01:16:44.462481540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-123-47,Uid:9127c502dd305c52e8f5d544720a58ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d4684bebf9508d92d48e02410b0c9fda364c4f5cbfdf7e62f8d5802d7dabfe4\"" Mar 7 01:16:44.465334 kubelet[2165]: E0307 01:16:44.465306 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:44.468337 containerd[1467]: time="2026-03-07T01:16:44.468278703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-123-47,Uid:7bc6b2efd52fd0c137e66a51936b76e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a9a73a651c9d928505234e68cdadbc6db7bbfdefe5bcf5ec703cbf99ce45c6f\"" Mar 7 01:16:44.469050 containerd[1467]: time="2026-03-07T01:16:44.468960523Z" level=info msg="CreateContainer within sandbox \"9d4684bebf9508d92d48e02410b0c9fda364c4f5cbfdf7e62f8d5802d7dabfe4\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:16:44.469463 kubelet[2165]: E0307 01:16:44.469437 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:44.472559 containerd[1467]: time="2026-03-07T01:16:44.472534175Z" level=info msg="CreateContainer within sandbox \"267c0982545e8a44d44e28e26e229d2a6971374d9c0262265a8683bcbfb52606\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cf198c80c8ae01fc9b794e915a1e199b6a7cb207cbfc8217d6c42021396fb2f\"" Mar 7 01:16:44.473072 containerd[1467]: time="2026-03-07T01:16:44.473052395Z" level=info msg="CreateContainer within sandbox \"1a9a73a651c9d928505234e68cdadbc6db7bbfdefe5bcf5ec703cbf99ce45c6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:16:44.473576 containerd[1467]: time="2026-03-07T01:16:44.473548865Z" level=info msg="StartContainer for \"2cf198c80c8ae01fc9b794e915a1e199b6a7cb207cbfc8217d6c42021396fb2f\"" Mar 7 01:16:44.484310 kubelet[2165]: E0307 01:16:44.484239 2165 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.123.47:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-123-47&limit=500&resourceVersion=0\": dial tcp 172.236.123.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:16:44.486087 containerd[1467]: time="2026-03-07T01:16:44.486042012Z" level=info msg="CreateContainer within sandbox \"9d4684bebf9508d92d48e02410b0c9fda364c4f5cbfdf7e62f8d5802d7dabfe4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9e6e6d3e7bc48403e3a2ee03be3d6c886a3018e7a8275b0839705130c2ef201\"" Mar 7 01:16:44.486799 containerd[1467]: time="2026-03-07T01:16:44.486702212Z" level=info msg="StartContainer for 
\"d9e6e6d3e7bc48403e3a2ee03be3d6c886a3018e7a8275b0839705130c2ef201\"" Mar 7 01:16:44.491684 containerd[1467]: time="2026-03-07T01:16:44.491593254Z" level=info msg="CreateContainer within sandbox \"1a9a73a651c9d928505234e68cdadbc6db7bbfdefe5bcf5ec703cbf99ce45c6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4863b859e698e01cd7d0aee94420bda69a28959ea9069f45f72032041d6c310\"" Mar 7 01:16:44.492363 containerd[1467]: time="2026-03-07T01:16:44.492338525Z" level=info msg="StartContainer for \"e4863b859e698e01cd7d0aee94420bda69a28959ea9069f45f72032041d6c310\"" Mar 7 01:16:44.515377 systemd[1]: Started cri-containerd-2cf198c80c8ae01fc9b794e915a1e199b6a7cb207cbfc8217d6c42021396fb2f.scope - libcontainer container 2cf198c80c8ae01fc9b794e915a1e199b6a7cb207cbfc8217d6c42021396fb2f. Mar 7 01:16:44.528378 systemd[1]: Started cri-containerd-e4863b859e698e01cd7d0aee94420bda69a28959ea9069f45f72032041d6c310.scope - libcontainer container e4863b859e698e01cd7d0aee94420bda69a28959ea9069f45f72032041d6c310. Mar 7 01:16:44.546384 systemd[1]: Started cri-containerd-d9e6e6d3e7bc48403e3a2ee03be3d6c886a3018e7a8275b0839705130c2ef201.scope - libcontainer container d9e6e6d3e7bc48403e3a2ee03be3d6c886a3018e7a8275b0839705130c2ef201. 
Mar 7 01:16:44.609225 containerd[1467]: time="2026-03-07T01:16:44.608932453Z" level=info msg="StartContainer for \"2cf198c80c8ae01fc9b794e915a1e199b6a7cb207cbfc8217d6c42021396fb2f\" returns successfully" Mar 7 01:16:44.629728 kubelet[2165]: E0307 01:16:44.629664 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.123.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-123-47?timeout=10s\": dial tcp 172.236.123.47:6443: connect: connection refused" interval="1.6s" Mar 7 01:16:44.638406 containerd[1467]: time="2026-03-07T01:16:44.638303088Z" level=info msg="StartContainer for \"e4863b859e698e01cd7d0aee94420bda69a28959ea9069f45f72032041d6c310\" returns successfully" Mar 7 01:16:44.646967 containerd[1467]: time="2026-03-07T01:16:44.646915722Z" level=info msg="StartContainer for \"d9e6e6d3e7bc48403e3a2ee03be3d6c886a3018e7a8275b0839705130c2ef201\" returns successfully" Mar 7 01:16:44.813235 kubelet[2165]: I0307 01:16:44.813168 2165 kubelet_node_status.go:75] "Attempting to register node" node="172-236-123-47" Mar 7 01:16:45.270693 kubelet[2165]: E0307 01:16:45.270656 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:45.271708 kubelet[2165]: E0307 01:16:45.270832 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:45.274926 kubelet[2165]: E0307 01:16:45.274897 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:45.275060 kubelet[2165]: E0307 01:16:45.275032 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:45.277437 kubelet[2165]: E0307 01:16:45.277398 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:45.277540 kubelet[2165]: E0307 01:16:45.277514 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:46.286020 kubelet[2165]: E0307 01:16:46.285085 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:46.286020 kubelet[2165]: E0307 01:16:46.285272 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:46.286020 kubelet[2165]: E0307 01:16:46.285542 2165 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-123-47\" not found" node="172-236-123-47" Mar 7 01:16:46.286020 kubelet[2165]: E0307 01:16:46.285683 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:46.522480 kubelet[2165]: I0307 01:16:46.522429 2165 kubelet_node_status.go:78] "Successfully registered node" node="172-236-123-47" Mar 7 01:16:46.522480 kubelet[2165]: E0307 01:16:46.522478 2165 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-236-123-47\": node \"172-236-123-47\" not found" Mar 7 01:16:46.628446 kubelet[2165]: E0307 01:16:46.628333 2165 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{172-236-123-47.189a6a3a97a36e93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-123-47,UID:172-236-123-47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-123-47,},FirstTimestamp:2026-03-07 01:16:43.208429203 +0000 UTC m=+0.491471596,LastTimestamp:2026-03-07 01:16:43.208429203 +0000 UTC m=+0.491471596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-123-47,}" Mar 7 01:16:46.642584 kubelet[2165]: E0307 01:16:46.642534 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-123-47\" not found" Mar 7 01:16:46.742673 kubelet[2165]: E0307 01:16:46.742611 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-123-47\" not found" Mar 7 01:16:46.843233 kubelet[2165]: E0307 01:16:46.843156 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-123-47\" not found" Mar 7 01:16:46.944474 kubelet[2165]: E0307 01:16:46.943777 2165 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-123-47\" not found" Mar 7 01:16:47.021727 kubelet[2165]: I0307 01:16:47.021650 2165 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:47.031147 kubelet[2165]: E0307 01:16:47.031105 2165 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-123-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:47.031147 kubelet[2165]: I0307 01:16:47.031157 2165 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:47.033195 
kubelet[2165]: E0307 01:16:47.033166 2165 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-123-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:47.033288 kubelet[2165]: I0307 01:16:47.033213 2165 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:47.035419 kubelet[2165]: E0307 01:16:47.035398 2165 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-123-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:47.212238 kubelet[2165]: I0307 01:16:47.211838 2165 apiserver.go:52] "Watching apiserver" Mar 7 01:16:47.225234 kubelet[2165]: I0307 01:16:47.224725 2165 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:16:47.287147 kubelet[2165]: I0307 01:16:47.286386 2165 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:47.296315 kubelet[2165]: E0307 01:16:47.296291 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:47.613420 kernel: hrtimer: interrupt took 5163043 ns Mar 7 01:16:48.264632 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-7.scope)... Mar 7 01:16:48.264654 systemd[1]: Reloading... Mar 7 01:16:48.292381 kubelet[2165]: E0307 01:16:48.291784 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:48.383240 zram_generator::config[2494]: No configuration found. 
Mar 7 01:16:48.510454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:16:48.597099 systemd[1]: Reloading finished in 332 ms. Mar 7 01:16:48.648647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:48.663380 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:16:48.663686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:48.669630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:48.881404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:48.884117 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:16:48.947656 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:16:48.947656 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:16:48.948177 kubelet[2544]: I0307 01:16:48.947697 2544 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:16:48.955183 kubelet[2544]: I0307 01:16:48.955146 2544 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:16:48.956238 kubelet[2544]: I0307 01:16:48.955275 2544 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:16:48.956238 kubelet[2544]: I0307 01:16:48.955318 2544 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:16:48.956238 kubelet[2544]: I0307 01:16:48.955326 2544 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:16:48.956238 kubelet[2544]: I0307 01:16:48.955671 2544 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:16:48.957172 kubelet[2544]: I0307 01:16:48.957104 2544 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:16:48.962587 kubelet[2544]: I0307 01:16:48.962564 2544 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:16:48.967888 kubelet[2544]: E0307 01:16:48.967846 2544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:16:48.967954 kubelet[2544]: I0307 01:16:48.967914 2544 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:16:48.972286 kubelet[2544]: I0307 01:16:48.972261 2544 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:16:48.972613 kubelet[2544]: I0307 01:16:48.972558 2544 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:16:48.972765 kubelet[2544]: I0307 01:16:48.972596 2544 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-123-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:16:48.972765 kubelet[2544]: I0307 01:16:48.972752 2544 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:16:48.972765 
kubelet[2544]: I0307 01:16:48.972762 2544 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:16:48.972912 kubelet[2544]: I0307 01:16:48.972793 2544 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:16:48.973066 kubelet[2544]: I0307 01:16:48.973024 2544 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:48.973301 kubelet[2544]: I0307 01:16:48.973267 2544 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:16:48.973301 kubelet[2544]: I0307 01:16:48.973290 2544 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:16:48.973780 kubelet[2544]: I0307 01:16:48.973750 2544 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:16:48.975253 kubelet[2544]: I0307 01:16:48.975221 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:16:48.976749 kubelet[2544]: I0307 01:16:48.976716 2544 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:16:48.978477 kubelet[2544]: I0307 01:16:48.978429 2544 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:16:48.978477 kubelet[2544]: I0307 01:16:48.978468 2544 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:16:48.982928 kubelet[2544]: I0307 01:16:48.982486 2544 server.go:1262] "Started kubelet" Mar 7 01:16:48.985624 kubelet[2544]: I0307 01:16:48.985576 2544 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:16:48.985732 kubelet[2544]: I0307 01:16:48.985718 2544 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:16:48.986189 kubelet[2544]: I0307 01:16:48.986153 2544 server.go:249] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:16:48.986416 kubelet[2544]: I0307 01:16:48.986367 2544 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:16:48.988691 kubelet[2544]: I0307 01:16:48.988676 2544 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:16:48.991228 kubelet[2544]: I0307 01:16:48.990372 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:16:49.003286 kubelet[2544]: I0307 01:16:49.001681 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:16:49.003754 kubelet[2544]: I0307 01:16:49.003738 2544 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:16:49.004479 kubelet[2544]: I0307 01:16:49.004439 2544 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:16:49.005107 kubelet[2544]: I0307 01:16:49.005094 2544 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:16:49.006862 kubelet[2544]: I0307 01:16:49.006844 2544 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:16:49.007043 kubelet[2544]: I0307 01:16:49.007013 2544 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:16:49.011557 kubelet[2544]: I0307 01:16:49.011536 2544 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:16:49.027386 kubelet[2544]: I0307 01:16:49.027357 2544 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:16:49.028863 kubelet[2544]: I0307 01:16:49.028849 2544 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:16:49.028918 kubelet[2544]: I0307 01:16:49.028909 2544 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:16:49.028976 kubelet[2544]: I0307 01:16:49.028968 2544 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:16:49.029102 kubelet[2544]: E0307 01:16:49.029084 2544 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:16:49.035361 kubelet[2544]: E0307 01:16:49.035095 2544 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:16:49.126599 kubelet[2544]: I0307 01:16:49.126124 2544 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:16:49.126599 kubelet[2544]: I0307 01:16:49.126578 2544 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:16:49.126599 kubelet[2544]: I0307 01:16:49.126614 2544 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:49.126810 kubelet[2544]: I0307 01:16:49.126755 2544 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:16:49.126810 kubelet[2544]: I0307 01:16:49.126766 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:16:49.126810 kubelet[2544]: I0307 01:16:49.126785 2544 policy_none.go:49] "None policy: Start" Mar 7 01:16:49.126810 kubelet[2544]: I0307 01:16:49.126797 2544 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:16:49.126810 kubelet[2544]: I0307 01:16:49.126808 2544 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:16:49.127143 kubelet[2544]: I0307 01:16:49.127110 2544 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:16:49.127143 kubelet[2544]: I0307 01:16:49.127122 2544 policy_none.go:47] "Start" Mar 7 01:16:49.129504 kubelet[2544]: E0307 
01:16:49.129337 2544 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:16:49.135582 kubelet[2544]: E0307 01:16:49.135448 2544 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:16:49.137570 kubelet[2544]: I0307 01:16:49.136492 2544 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:16:49.137570 kubelet[2544]: I0307 01:16:49.136512 2544 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:16:49.137570 kubelet[2544]: I0307 01:16:49.136895 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:16:49.151167 kubelet[2544]: E0307 01:16:49.149917 2544 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:16:49.252131 kubelet[2544]: I0307 01:16:49.252037 2544 kubelet_node_status.go:75] "Attempting to register node" node="172-236-123-47" Mar 7 01:16:49.262441 kubelet[2544]: I0307 01:16:49.262402 2544 kubelet_node_status.go:124] "Node was previously registered" node="172-236-123-47" Mar 7 01:16:49.262561 kubelet[2544]: I0307 01:16:49.262478 2544 kubelet_node_status.go:78] "Successfully registered node" node="172-236-123-47" Mar 7 01:16:49.331242 kubelet[2544]: I0307 01:16:49.330923 2544 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:49.331242 kubelet[2544]: I0307 01:16:49.330943 2544 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:49.332440 kubelet[2544]: I0307 01:16:49.332399 2544 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.343314 kubelet[2544]: E0307 01:16:49.342903 2544 kubelet.go:3222] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-123-47\" already exists" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:49.409559 kubelet[2544]: I0307 01:16:49.407459 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.409559 kubelet[2544]: I0307 01:16:49.407568 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-k8s-certs\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:49.409559 kubelet[2544]: I0307 01:16:49.407629 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-ca-certs\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.409559 kubelet[2544]: I0307 01:16:49.407649 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-k8s-certs\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.409559 kubelet[2544]: I0307 01:16:49.407666 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-kubeconfig\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.410021 kubelet[2544]: I0307 01:16:49.408759 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7bc6b2efd52fd0c137e66a51936b76e4-kubeconfig\") pod \"kube-scheduler-172-236-123-47\" (UID: \"7bc6b2efd52fd0c137e66a51936b76e4\") " pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:49.410021 kubelet[2544]: I0307 01:16:49.408804 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-ca-certs\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:49.410021 kubelet[2544]: I0307 01:16:49.408845 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9127c502dd305c52e8f5d544720a58ad-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-123-47\" (UID: \"9127c502dd305c52e8f5d544720a58ad\") " pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:49.410021 kubelet[2544]: I0307 01:16:49.408863 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbaa95c0ddb2e98e933b20d5ebf3770b-flexvolume-dir\") pod \"kube-controller-manager-172-236-123-47\" (UID: \"fbaa95c0ddb2e98e933b20d5ebf3770b\") " pod="kube-system/kube-controller-manager-172-236-123-47" Mar 7 01:16:49.641618 kubelet[2544]: E0307 01:16:49.641145 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:49.644671 kubelet[2544]: E0307 01:16:49.644085 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:49.644671 kubelet[2544]: E0307 01:16:49.644277 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:49.977983 kubelet[2544]: I0307 01:16:49.976578 2544 apiserver.go:52] "Watching apiserver" Mar 7 01:16:50.005561 kubelet[2544]: I0307 01:16:50.005394 2544 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:16:50.052834 kubelet[2544]: I0307 01:16:50.052392 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-123-47" podStartSLOduration=3.052336173 podStartE2EDuration="3.052336173s" podCreationTimestamp="2026-03-07 01:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:50.050693552 +0000 UTC m=+1.159900001" watchObservedRunningTime="2026-03-07 01:16:50.052336173 +0000 UTC m=+1.161542612" Mar 7 01:16:50.071707 kubelet[2544]: I0307 01:16:50.071574 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-123-47" podStartSLOduration=1.0715573919999999 podStartE2EDuration="1.071557392s" podCreationTimestamp="2026-03-07 01:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:50.071367442 +0000 UTC m=+1.180573881" watchObservedRunningTime="2026-03-07 01:16:50.071557392 +0000 UTC m=+1.180763841" 
Mar 7 01:16:50.074904 kubelet[2544]: E0307 01:16:50.074869 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:50.079451 kubelet[2544]: I0307 01:16:50.076549 2544 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:50.079451 kubelet[2544]: I0307 01:16:50.076748 2544 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:50.092639 kubelet[2544]: E0307 01:16:50.092385 2544 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-123-47\" already exists" pod="kube-system/kube-scheduler-172-236-123-47" Mar 7 01:16:50.092639 kubelet[2544]: E0307 01:16:50.092548 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:50.095581 kubelet[2544]: E0307 01:16:50.095345 2544 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-123-47\" already exists" pod="kube-system/kube-apiserver-172-236-123-47" Mar 7 01:16:50.095581 kubelet[2544]: E0307 01:16:50.095517 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:50.110234 kubelet[2544]: I0307 01:16:50.109438 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-123-47" podStartSLOduration=1.109422981 podStartE2EDuration="1.109422981s" podCreationTimestamp="2026-03-07 01:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:50.089252841 +0000 
UTC m=+1.198459300" watchObservedRunningTime="2026-03-07 01:16:50.109422981 +0000 UTC m=+1.218629420" Mar 7 01:16:51.086086 kubelet[2544]: E0307 01:16:51.085030 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:51.086086 kubelet[2544]: E0307 01:16:51.085708 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:52.084708 kubelet[2544]: E0307 01:16:52.084658 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:53.281564 kubelet[2544]: E0307 01:16:53.281470 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:53.872236 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 01:16:54.273684 kubelet[2544]: I0307 01:16:54.273546 2544 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:16:54.274432 containerd[1467]: time="2026-03-07T01:16:54.274378122Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:16:54.274813 kubelet[2544]: I0307 01:16:54.274651 2544 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:16:54.785812 systemd[1]: Created slice kubepods-besteffort-podc6db8aef_60e9_4ac6_b82b_99026158a226.slice - libcontainer container kubepods-besteffort-podc6db8aef_60e9_4ac6_b82b_99026158a226.slice. 
Mar 7 01:16:54.842146 kubelet[2544]: I0307 01:16:54.841891 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6db8aef-60e9-4ac6-b82b-99026158a226-kube-proxy\") pod \"kube-proxy-dfvnf\" (UID: \"c6db8aef-60e9-4ac6-b82b-99026158a226\") " pod="kube-system/kube-proxy-dfvnf" Mar 7 01:16:54.842146 kubelet[2544]: I0307 01:16:54.841947 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6db8aef-60e9-4ac6-b82b-99026158a226-xtables-lock\") pod \"kube-proxy-dfvnf\" (UID: \"c6db8aef-60e9-4ac6-b82b-99026158a226\") " pod="kube-system/kube-proxy-dfvnf" Mar 7 01:16:54.842146 kubelet[2544]: I0307 01:16:54.841970 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6db8aef-60e9-4ac6-b82b-99026158a226-lib-modules\") pod \"kube-proxy-dfvnf\" (UID: \"c6db8aef-60e9-4ac6-b82b-99026158a226\") " pod="kube-system/kube-proxy-dfvnf" Mar 7 01:16:54.842146 kubelet[2544]: I0307 01:16:54.841993 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njfv\" (UniqueName: \"kubernetes.io/projected/c6db8aef-60e9-4ac6-b82b-99026158a226-kube-api-access-2njfv\") pod \"kube-proxy-dfvnf\" (UID: \"c6db8aef-60e9-4ac6-b82b-99026158a226\") " pod="kube-system/kube-proxy-dfvnf" Mar 7 01:16:54.952357 kubelet[2544]: E0307 01:16:54.952296 2544 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 01:16:54.952357 kubelet[2544]: E0307 01:16:54.952343 2544 projected.go:196] Error preparing data for projected volume kube-api-access-2njfv for pod kube-system/kube-proxy-dfvnf: configmap "kube-root-ca.crt" not found Mar 7 01:16:54.952642 kubelet[2544]: E0307 01:16:54.952434 2544 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6db8aef-60e9-4ac6-b82b-99026158a226-kube-api-access-2njfv podName:c6db8aef-60e9-4ac6-b82b-99026158a226 nodeName:}" failed. No retries permitted until 2026-03-07 01:16:55.452390361 +0000 UTC m=+6.561596800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2njfv" (UniqueName: "kubernetes.io/projected/c6db8aef-60e9-4ac6-b82b-99026158a226-kube-api-access-2njfv") pod "kube-proxy-dfvnf" (UID: "c6db8aef-60e9-4ac6-b82b-99026158a226") : configmap "kube-root-ca.crt" not found Mar 7 01:16:55.382406 systemd[1]: Created slice kubepods-besteffort-podd68250a7_d1f7_4588_af08_b671134d07dc.slice - libcontainer container kubepods-besteffort-podd68250a7_d1f7_4588_af08_b671134d07dc.slice. Mar 7 01:16:55.453370 kubelet[2544]: I0307 01:16:55.453274 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6krd\" (UniqueName: \"kubernetes.io/projected/d68250a7-d1f7-4588-af08-b671134d07dc-kube-api-access-z6krd\") pod \"tigera-operator-5588576f44-cdkwr\" (UID: \"d68250a7-d1f7-4588-af08-b671134d07dc\") " pod="tigera-operator/tigera-operator-5588576f44-cdkwr" Mar 7 01:16:55.453370 kubelet[2544]: I0307 01:16:55.453346 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d68250a7-d1f7-4588-af08-b671134d07dc-var-lib-calico\") pod \"tigera-operator-5588576f44-cdkwr\" (UID: \"d68250a7-d1f7-4588-af08-b671134d07dc\") " pod="tigera-operator/tigera-operator-5588576f44-cdkwr" Mar 7 01:16:55.691053 containerd[1467]: time="2026-03-07T01:16:55.690874030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-cdkwr,Uid:d68250a7-d1f7-4588-af08-b671134d07dc,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:16:55.696714 kubelet[2544]: E0307 01:16:55.696645 2544 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:55.697583 containerd[1467]: time="2026-03-07T01:16:55.697447723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dfvnf,Uid:c6db8aef-60e9-4ac6-b82b-99026158a226,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:55.729505 containerd[1467]: time="2026-03-07T01:16:55.729240769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:55.729505 containerd[1467]: time="2026-03-07T01:16:55.729438129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:55.731556 containerd[1467]: time="2026-03-07T01:16:55.730905980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:55.731556 containerd[1467]: time="2026-03-07T01:16:55.731103770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:55.751770 containerd[1467]: time="2026-03-07T01:16:55.751680620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:55.752402 containerd[1467]: time="2026-03-07T01:16:55.752353091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:55.752533 containerd[1467]: time="2026-03-07T01:16:55.752508641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:55.752847 containerd[1467]: time="2026-03-07T01:16:55.752795401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:55.769463 systemd[1]: Started cri-containerd-984bad2e4a1e36bfb625d342d17f2ae128f1cf043482dcd3ba9c879e96e5f03d.scope - libcontainer container 984bad2e4a1e36bfb625d342d17f2ae128f1cf043482dcd3ba9c879e96e5f03d. Mar 7 01:16:55.778693 systemd[1]: Started cri-containerd-827e06fa407e158f1f684951f555ae9f5a624dfc401922fbbec9b7c5e49a1e3c.scope - libcontainer container 827e06fa407e158f1f684951f555ae9f5a624dfc401922fbbec9b7c5e49a1e3c. Mar 7 01:16:55.824254 containerd[1467]: time="2026-03-07T01:16:55.823904577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dfvnf,Uid:c6db8aef-60e9-4ac6-b82b-99026158a226,Namespace:kube-system,Attempt:0,} returns sandbox id \"827e06fa407e158f1f684951f555ae9f5a624dfc401922fbbec9b7c5e49a1e3c\"" Mar 7 01:16:55.827180 kubelet[2544]: E0307 01:16:55.825430 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:55.831942 containerd[1467]: time="2026-03-07T01:16:55.831892821Z" level=info msg="CreateContainer within sandbox \"827e06fa407e158f1f684951f555ae9f5a624dfc401922fbbec9b7c5e49a1e3c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:16:55.858092 containerd[1467]: time="2026-03-07T01:16:55.858018184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-cdkwr,Uid:d68250a7-d1f7-4588-af08-b671134d07dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"984bad2e4a1e36bfb625d342d17f2ae128f1cf043482dcd3ba9c879e96e5f03d\"" Mar 7 01:16:55.863416 containerd[1467]: time="2026-03-07T01:16:55.863358306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:16:55.864711 containerd[1467]: time="2026-03-07T01:16:55.864677897Z" level=info msg="CreateContainer within sandbox \"827e06fa407e158f1f684951f555ae9f5a624dfc401922fbbec9b7c5e49a1e3c\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc2af8889c5cd057994a5fc76a6d18a21062b59702547d89d2b3a5b569e2e0b9\"" Mar 7 01:16:55.866134 containerd[1467]: time="2026-03-07T01:16:55.866086558Z" level=info msg="StartContainer for \"cc2af8889c5cd057994a5fc76a6d18a21062b59702547d89d2b3a5b569e2e0b9\"" Mar 7 01:16:55.902569 systemd[1]: Started cri-containerd-cc2af8889c5cd057994a5fc76a6d18a21062b59702547d89d2b3a5b569e2e0b9.scope - libcontainer container cc2af8889c5cd057994a5fc76a6d18a21062b59702547d89d2b3a5b569e2e0b9. Mar 7 01:16:55.937228 containerd[1467]: time="2026-03-07T01:16:55.936949963Z" level=info msg="StartContainer for \"cc2af8889c5cd057994a5fc76a6d18a21062b59702547d89d2b3a5b569e2e0b9\" returns successfully" Mar 7 01:16:56.099670 kubelet[2544]: E0307 01:16:56.099521 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:56.955447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267749160.mount: Deactivated successfully. 
Mar 7 01:16:57.392646 kubelet[2544]: E0307 01:16:57.392320 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:57.405660 kubelet[2544]: I0307 01:16:57.405610 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dfvnf" podStartSLOduration=3.405594627 podStartE2EDuration="3.405594627s" podCreationTimestamp="2026-03-07 01:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:56.11186112 +0000 UTC m=+7.221067559" watchObservedRunningTime="2026-03-07 01:16:57.405594627 +0000 UTC m=+8.514801066" Mar 7 01:16:58.114535 kubelet[2544]: E0307 01:16:58.114493 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:59.116219 kubelet[2544]: E0307 01:16:59.116165 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:16:59.352049 containerd[1467]: time="2026-03-07T01:16:59.351995479Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.352952 containerd[1467]: time="2026-03-07T01:16:59.352781280Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:16:59.353513 containerd[1467]: time="2026-03-07T01:16:59.353483930Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.356572 containerd[1467]: 
time="2026-03-07T01:16:59.356546592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:59.357698 containerd[1467]: time="2026-03-07T01:16:59.357666122Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.494264506s" Mar 7 01:16:59.357824 containerd[1467]: time="2026-03-07T01:16:59.357700642Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:16:59.361519 containerd[1467]: time="2026-03-07T01:16:59.361486524Z" level=info msg="CreateContainer within sandbox \"984bad2e4a1e36bfb625d342d17f2ae128f1cf043482dcd3ba9c879e96e5f03d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:16:59.373675 containerd[1467]: time="2026-03-07T01:16:59.370321069Z" level=info msg="CreateContainer within sandbox \"984bad2e4a1e36bfb625d342d17f2ae128f1cf043482dcd3ba9c879e96e5f03d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a7b2def73f37b6f9c6e8907f86ababa1e86ef650f54d3488c1fdcc1f3ec6ce16\"" Mar 7 01:16:59.373675 containerd[1467]: time="2026-03-07T01:16:59.372359250Z" level=info msg="StartContainer for \"a7b2def73f37b6f9c6e8907f86ababa1e86ef650f54d3488c1fdcc1f3ec6ce16\"" Mar 7 01:16:59.412366 systemd[1]: Started cri-containerd-a7b2def73f37b6f9c6e8907f86ababa1e86ef650f54d3488c1fdcc1f3ec6ce16.scope - libcontainer container a7b2def73f37b6f9c6e8907f86ababa1e86ef650f54d3488c1fdcc1f3ec6ce16. 
Mar 7 01:16:59.443569 containerd[1467]: time="2026-03-07T01:16:59.443524565Z" level=info msg="StartContainer for \"a7b2def73f37b6f9c6e8907f86ababa1e86ef650f54d3488c1fdcc1f3ec6ce16\" returns successfully" Mar 7 01:16:59.897375 kubelet[2544]: E0307 01:16:59.896903 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:00.121876 kubelet[2544]: E0307 01:17:00.121834 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:00.474499 systemd-timesyncd[1384]: Contacted time server [2600:3c02::f03c:94ff:fee2:cb31]:123 (2.flatcar.pool.ntp.org). Mar 7 01:17:00.474563 systemd-timesyncd[1384]: Initial clock synchronization to Sat 2026-03-07 01:17:00.735766 UTC. Mar 7 01:17:03.286605 kubelet[2544]: E0307 01:17:03.286553 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:03.296073 kubelet[2544]: I0307 01:17:03.295824 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-cdkwr" podStartSLOduration=4.7969680839999995 podStartE2EDuration="8.295810422s" podCreationTimestamp="2026-03-07 01:16:55 +0000 UTC" firstStartedPulling="2026-03-07 01:16:55.859901925 +0000 UTC m=+6.969108364" lastFinishedPulling="2026-03-07 01:16:59.358744263 +0000 UTC m=+10.467950702" observedRunningTime="2026-03-07 01:17:00.133382 +0000 UTC m=+11.242588439" watchObservedRunningTime="2026-03-07 01:17:03.295810422 +0000 UTC m=+14.405016862" Mar 7 01:17:05.248696 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 7 01:17:05.286467 sshd[1676]: pam_unix(sshd:session): session closed for 
user core Mar 7 01:17:05.297764 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:17:05.298798 systemd[1]: sshd@6-172.236.123.47:22-68.220.241.50:48724.service: Deactivated successfully. Mar 7 01:17:05.305941 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:17:05.306426 systemd[1]: session-7.scope: Consumed 5.217s CPU time, 159.7M memory peak, 0B memory swap peak. Mar 7 01:17:05.309941 systemd-logind[1444]: Removed session 7. Mar 7 01:17:08.087261 systemd[1]: Created slice kubepods-besteffort-pode72618b4_0e51_46aa_9994_6b9bb81b9a9f.slice - libcontainer container kubepods-besteffort-pode72618b4_0e51_46aa_9994_6b9bb81b9a9f.slice. Mar 7 01:17:08.141028 kubelet[2544]: I0307 01:17:08.140954 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8gd\" (UniqueName: \"kubernetes.io/projected/e72618b4-0e51-46aa-9994-6b9bb81b9a9f-kube-api-access-fk8gd\") pod \"calico-typha-dcf9bcd64-nsvcq\" (UID: \"e72618b4-0e51-46aa-9994-6b9bb81b9a9f\") " pod="calico-system/calico-typha-dcf9bcd64-nsvcq" Mar 7 01:17:08.141028 kubelet[2544]: I0307 01:17:08.141016 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e72618b4-0e51-46aa-9994-6b9bb81b9a9f-typha-certs\") pod \"calico-typha-dcf9bcd64-nsvcq\" (UID: \"e72618b4-0e51-46aa-9994-6b9bb81b9a9f\") " pod="calico-system/calico-typha-dcf9bcd64-nsvcq" Mar 7 01:17:08.141028 kubelet[2544]: I0307 01:17:08.141035 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e72618b4-0e51-46aa-9994-6b9bb81b9a9f-tigera-ca-bundle\") pod \"calico-typha-dcf9bcd64-nsvcq\" (UID: \"e72618b4-0e51-46aa-9994-6b9bb81b9a9f\") " pod="calico-system/calico-typha-dcf9bcd64-nsvcq" Mar 7 01:17:08.164837 systemd[1]: Created slice 
kubepods-besteffort-podcae03a92_cd4f_4115_931e_907d6ae30eef.slice - libcontainer container kubepods-besteffort-podcae03a92_cd4f_4115_931e_907d6ae30eef.slice. Mar 7 01:17:08.241995 kubelet[2544]: I0307 01:17:08.241467 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-cni-bin-dir\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.241995 kubelet[2544]: I0307 01:17:08.241506 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-cni-log-dir\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.241995 kubelet[2544]: I0307 01:17:08.241520 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-cni-net-dir\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.241995 kubelet[2544]: I0307 01:17:08.241535 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cae03a92-cd4f-4115-931e-907d6ae30eef-node-certs\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.241995 kubelet[2544]: I0307 01:17:08.241548 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-nodeproc\") pod \"calico-node-b859p\" (UID: 
\"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242531 kubelet[2544]: I0307 01:17:08.241571 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-lib-modules\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242531 kubelet[2544]: I0307 01:17:08.241583 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-policysync\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242531 kubelet[2544]: I0307 01:17:08.241597 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-var-lib-calico\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242531 kubelet[2544]: I0307 01:17:08.241610 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxlp\" (UniqueName: \"kubernetes.io/projected/cae03a92-cd4f-4115-931e-907d6ae30eef-kube-api-access-fdxlp\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242531 kubelet[2544]: I0307 01:17:08.241629 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-bpffs\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 
01:17:08.242639 kubelet[2544]: I0307 01:17:08.241642 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-sys-fs\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242639 kubelet[2544]: I0307 01:17:08.241660 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-flexvol-driver-host\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242639 kubelet[2544]: I0307 01:17:08.241675 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-xtables-lock\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242639 kubelet[2544]: I0307 01:17:08.241694 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cae03a92-cd4f-4115-931e-907d6ae30eef-tigera-ca-bundle\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.242639 kubelet[2544]: I0307 01:17:08.241740 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cae03a92-cd4f-4115-931e-907d6ae30eef-var-run-calico\") pod \"calico-node-b859p\" (UID: \"cae03a92-cd4f-4115-931e-907d6ae30eef\") " pod="calico-system/calico-node-b859p" Mar 7 01:17:08.308731 kubelet[2544]: E0307 01:17:08.308370 2544 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279" Mar 7 01:17:08.342924 kubelet[2544]: I0307 01:17:08.342777 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/109d9e3b-df59-4367-b34e-f9e69ac61279-socket-dir\") pod \"csi-node-driver-hjkhq\" (UID: \"109d9e3b-df59-4367-b34e-f9e69ac61279\") " pod="calico-system/csi-node-driver-hjkhq" Mar 7 01:17:08.342924 kubelet[2544]: I0307 01:17:08.342836 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwq9\" (UniqueName: \"kubernetes.io/projected/109d9e3b-df59-4367-b34e-f9e69ac61279-kube-api-access-scwq9\") pod \"csi-node-driver-hjkhq\" (UID: \"109d9e3b-df59-4367-b34e-f9e69ac61279\") " pod="calico-system/csi-node-driver-hjkhq" Mar 7 01:17:08.342924 kubelet[2544]: I0307 01:17:08.342888 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/109d9e3b-df59-4367-b34e-f9e69ac61279-kubelet-dir\") pod \"csi-node-driver-hjkhq\" (UID: \"109d9e3b-df59-4367-b34e-f9e69ac61279\") " pod="calico-system/csi-node-driver-hjkhq" Mar 7 01:17:08.342924 kubelet[2544]: I0307 01:17:08.342914 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/109d9e3b-df59-4367-b34e-f9e69ac61279-varrun\") pod \"csi-node-driver-hjkhq\" (UID: \"109d9e3b-df59-4367-b34e-f9e69ac61279\") " pod="calico-system/csi-node-driver-hjkhq" Mar 7 01:17:08.343178 kubelet[2544]: I0307 01:17:08.342946 2544 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/109d9e3b-df59-4367-b34e-f9e69ac61279-registration-dir\") pod \"csi-node-driver-hjkhq\" (UID: \"109d9e3b-df59-4367-b34e-f9e69ac61279\") " pod="calico-system/csi-node-driver-hjkhq" Mar 7 01:17:08.344954 kubelet[2544]: E0307 01:17:08.344884 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.344954 kubelet[2544]: W0307 01:17:08.344924 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.344954 kubelet[2544]: E0307 01:17:08.344949 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.346783 kubelet[2544]: E0307 01:17:08.345824 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.346783 kubelet[2544]: W0307 01:17:08.345837 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.346783 kubelet[2544]: E0307 01:17:08.345866 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.386833 kubelet[2544]: E0307 01:17:08.386712 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.386833 kubelet[2544]: W0307 01:17:08.386726 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.386833 kubelet[2544]: E0307 01:17:08.386738 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.388434 kubelet[2544]: E0307 01:17:08.388306 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.388434 kubelet[2544]: W0307 01:17:08.388321 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.388434 kubelet[2544]: E0307 01:17:08.388335 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.389506 kubelet[2544]: E0307 01:17:08.389368 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.389506 kubelet[2544]: W0307 01:17:08.389392 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.389506 kubelet[2544]: E0307 01:17:08.389408 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.389926 kubelet[2544]: E0307 01:17:08.389791 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.389926 kubelet[2544]: W0307 01:17:08.389805 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.389926 kubelet[2544]: E0307 01:17:08.389817 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.392617 kubelet[2544]: E0307 01:17:08.392490 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.392617 kubelet[2544]: W0307 01:17:08.392525 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.392617 kubelet[2544]: E0307 01:17:08.392537 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.392861 kubelet[2544]: E0307 01:17:08.392848 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.393566 kubelet[2544]: W0307 01:17:08.392902 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.393566 kubelet[2544]: E0307 01:17:08.392916 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.398949 kubelet[2544]: E0307 01:17:08.398925 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.399239 kubelet[2544]: W0307 01:17:08.399122 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.399547 kubelet[2544]: E0307 01:17:08.399284 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.400610 kubelet[2544]: E0307 01:17:08.400595 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.400722 kubelet[2544]: W0307 01:17:08.400671 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.400722 kubelet[2544]: E0307 01:17:08.400696 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.401260 kubelet[2544]: E0307 01:17:08.401166 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.401260 kubelet[2544]: W0307 01:17:08.401180 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.401260 kubelet[2544]: E0307 01:17:08.401194 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.402708 kubelet[2544]: E0307 01:17:08.402459 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:08.403939 containerd[1467]: time="2026-03-07T01:17:08.403894000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dcf9bcd64-nsvcq,Uid:e72618b4-0e51-46aa-9994-6b9bb81b9a9f,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:08.421280 kubelet[2544]: E0307 01:17:08.419791 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.421280 kubelet[2544]: W0307 01:17:08.421278 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.421430 kubelet[2544]: E0307 01:17:08.421303 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.444900 kubelet[2544]: E0307 01:17:08.444470 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.444900 kubelet[2544]: W0307 01:17:08.444629 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.444900 kubelet[2544]: E0307 01:17:08.444660 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.446492 kubelet[2544]: E0307 01:17:08.446314 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.446492 kubelet[2544]: W0307 01:17:08.446451 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.446492 kubelet[2544]: E0307 01:17:08.446471 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.447603 kubelet[2544]: E0307 01:17:08.447356 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.447603 kubelet[2544]: W0307 01:17:08.447373 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.447603 kubelet[2544]: E0307 01:17:08.447388 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.449294 kubelet[2544]: E0307 01:17:08.449199 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.449294 kubelet[2544]: W0307 01:17:08.449250 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.449294 kubelet[2544]: E0307 01:17:08.449266 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.450170 kubelet[2544]: E0307 01:17:08.450101 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.450170 kubelet[2544]: W0307 01:17:08.450116 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.450170 kubelet[2544]: E0307 01:17:08.450129 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.451080 kubelet[2544]: E0307 01:17:08.450910 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.451080 kubelet[2544]: W0307 01:17:08.450925 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.451080 kubelet[2544]: E0307 01:17:08.450975 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.452187 kubelet[2544]: E0307 01:17:08.451883 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.452187 kubelet[2544]: W0307 01:17:08.451897 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.452187 kubelet[2544]: E0307 01:17:08.451909 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.452187 kubelet[2544]: E0307 01:17:08.452148 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.452187 kubelet[2544]: W0307 01:17:08.452159 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.452187 kubelet[2544]: E0307 01:17:08.452172 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.452425 kubelet[2544]: E0307 01:17:08.452390 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.452425 kubelet[2544]: W0307 01:17:08.452398 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.452425 kubelet[2544]: E0307 01:17:08.452406 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.452833 kubelet[2544]: E0307 01:17:08.452595 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.452833 kubelet[2544]: W0307 01:17:08.452606 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.452833 kubelet[2544]: E0307 01:17:08.452614 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.452944 kubelet[2544]: E0307 01:17:08.452850 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.452944 kubelet[2544]: W0307 01:17:08.452859 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.452944 kubelet[2544]: E0307 01:17:08.452868 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.453402 kubelet[2544]: E0307 01:17:08.453173 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.453402 kubelet[2544]: W0307 01:17:08.453184 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.453402 kubelet[2544]: E0307 01:17:08.453196 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.453525 kubelet[2544]: E0307 01:17:08.453476 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.453525 kubelet[2544]: W0307 01:17:08.453486 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.453525 kubelet[2544]: E0307 01:17:08.453494 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.453978 kubelet[2544]: E0307 01:17:08.453778 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.453978 kubelet[2544]: W0307 01:17:08.453797 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.453978 kubelet[2544]: E0307 01:17:08.453807 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.454664 kubelet[2544]: E0307 01:17:08.454584 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.454664 kubelet[2544]: W0307 01:17:08.454596 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.454664 kubelet[2544]: E0307 01:17:08.454607 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.455392 kubelet[2544]: E0307 01:17:08.455323 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.455392 kubelet[2544]: W0307 01:17:08.455367 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.455392 kubelet[2544]: E0307 01:17:08.455380 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.456448 kubelet[2544]: E0307 01:17:08.456425 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.456448 kubelet[2544]: W0307 01:17:08.456437 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.456448 kubelet[2544]: E0307 01:17:08.456448 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.457755 kubelet[2544]: E0307 01:17:08.457599 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.457755 kubelet[2544]: W0307 01:17:08.457611 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.457755 kubelet[2544]: E0307 01:17:08.457626 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.458479 kubelet[2544]: E0307 01:17:08.458450 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.459109 kubelet[2544]: W0307 01:17:08.458464 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.459109 kubelet[2544]: E0307 01:17:08.459087 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.459905 containerd[1467]: time="2026-03-07T01:17:08.459785015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:08.460642 kubelet[2544]: E0307 01:17:08.460174 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.460642 kubelet[2544]: W0307 01:17:08.460186 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.460642 kubelet[2544]: E0307 01:17:08.460195 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.460792 containerd[1467]: time="2026-03-07T01:17:08.460615572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:08.461550 containerd[1467]: time="2026-03-07T01:17:08.461506464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:08.461834 kubelet[2544]: E0307 01:17:08.461810 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.461834 kubelet[2544]: W0307 01:17:08.461828 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.461913 kubelet[2544]: E0307 01:17:08.461839 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.462070 kubelet[2544]: E0307 01:17:08.462046 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.462122 kubelet[2544]: W0307 01:17:08.462068 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.462122 kubelet[2544]: E0307 01:17:08.462087 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.462581 containerd[1467]: time="2026-03-07T01:17:08.462051917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:08.463263 kubelet[2544]: E0307 01:17:08.463239 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.463263 kubelet[2544]: W0307 01:17:08.463256 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.463346 kubelet[2544]: E0307 01:17:08.463270 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.467273 kubelet[2544]: E0307 01:17:08.466337 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.467273 kubelet[2544]: W0307 01:17:08.466351 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.467273 kubelet[2544]: E0307 01:17:08.466363 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.467273 kubelet[2544]: E0307 01:17:08.466685 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.467273 kubelet[2544]: W0307 01:17:08.466698 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.467273 kubelet[2544]: E0307 01:17:08.466712 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:08.470723 kubelet[2544]: E0307 01:17:08.470686 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:08.470723 kubelet[2544]: W0307 01:17:08.470703 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:08.470723 kubelet[2544]: E0307 01:17:08.470717 2544 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:08.473006 containerd[1467]: time="2026-03-07T01:17:08.472960172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b859p,Uid:cae03a92-cd4f-4115-931e-907d6ae30eef,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:08.505178 systemd[1]: Started cri-containerd-f6737b8c775817c273bafe9f194fa98b146214eb3dbf24c59e6001934532d523.scope - libcontainer container f6737b8c775817c273bafe9f194fa98b146214eb3dbf24c59e6001934532d523. Mar 7 01:17:08.515275 containerd[1467]: time="2026-03-07T01:17:08.514692687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:08.515275 containerd[1467]: time="2026-03-07T01:17:08.514763373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:08.515275 containerd[1467]: time="2026-03-07T01:17:08.514775912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:08.515275 containerd[1467]: time="2026-03-07T01:17:08.514851976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:08.545487 systemd[1]: Started cri-containerd-753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c.scope - libcontainer container 753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c. Mar 7 01:17:08.599358 update_engine[1446]: I20260307 01:17:08.598248 1446 update_attempter.cc:509] Updating boot flags... Mar 7 01:17:08.634024 containerd[1467]: time="2026-03-07T01:17:08.633990768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b859p,Uid:cae03a92-cd4f-4115-931e-907d6ae30eef,Namespace:calico-system,Attempt:0,} returns sandbox id \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\"" Mar 7 01:17:08.641660 containerd[1467]: time="2026-03-07T01:17:08.641297992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:17:08.678471 containerd[1467]: time="2026-03-07T01:17:08.678422777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dcf9bcd64-nsvcq,Uid:e72618b4-0e51-46aa-9994-6b9bb81b9a9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6737b8c775817c273bafe9f194fa98b146214eb3dbf24c59e6001934532d523\"" Mar 7 01:17:08.682491 kubelet[2544]: E0307 01:17:08.682462 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:08.709408 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3125) Mar 7 01:17:08.824241 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3129) Mar 7 01:17:09.335472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount370579013.mount: Deactivated successfully. 
Mar 7 01:17:09.428027 containerd[1467]: time="2026-03-07T01:17:09.427965983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.429227 containerd[1467]: time="2026-03-07T01:17:09.429099078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Mar 7 01:17:09.430090 containerd[1467]: time="2026-03-07T01:17:09.430047109Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.436312 containerd[1467]: time="2026-03-07T01:17:09.432870419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.436312 containerd[1467]: time="2026-03-07T01:17:09.433593380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 792.26313ms"
Mar 7 01:17:09.436312 containerd[1467]: time="2026-03-07T01:17:09.433616082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 7 01:17:09.438397 containerd[1467]: time="2026-03-07T01:17:09.438364082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:17:09.442460 containerd[1467]: time="2026-03-07T01:17:09.442427669Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 7 01:17:09.457661 containerd[1467]: time="2026-03-07T01:17:09.457615825Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab\""
Mar 7 01:17:09.458474 containerd[1467]: time="2026-03-07T01:17:09.458174714Z" level=info msg="StartContainer for \"00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab\""
Mar 7 01:17:09.490356 systemd[1]: Started cri-containerd-00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab.scope - libcontainer container 00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab.
Mar 7 01:17:09.525785 containerd[1467]: time="2026-03-07T01:17:09.525746666Z" level=info msg="StartContainer for \"00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab\" returns successfully"
Mar 7 01:17:09.544756 systemd[1]: cri-containerd-00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab.scope: Deactivated successfully.
Mar 7 01:17:09.616514 containerd[1467]: time="2026-03-07T01:17:09.616430895Z" level=info msg="shim disconnected" id=00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab namespace=k8s.io
Mar 7 01:17:09.616514 containerd[1467]: time="2026-03-07T01:17:09.616495725Z" level=warning msg="cleaning up after shim disconnected" id=00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab namespace=k8s.io
Mar 7 01:17:09.616514 containerd[1467]: time="2026-03-07T01:17:09.616506545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:17:10.031015 kubelet[2544]: E0307 01:17:10.029764 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279"
Mar 7 01:17:10.250182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00b197e7cc2193c0d6b568b4d1b05eb3a6b901ae5f32540d02ffb45fc2b2afab-rootfs.mount: Deactivated successfully.
Mar 7 01:17:10.702805 containerd[1467]: time="2026-03-07T01:17:10.702758984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:10.703940 containerd[1467]: time="2026-03-07T01:17:10.703603796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Mar 7 01:17:10.705982 containerd[1467]: time="2026-03-07T01:17:10.704496212Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:10.706982 containerd[1467]: time="2026-03-07T01:17:10.706647823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:10.708089 containerd[1467]: time="2026-03-07T01:17:10.707857952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.269456914s"
Mar 7 01:17:10.708134 containerd[1467]: time="2026-03-07T01:17:10.708085385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:17:10.709354 containerd[1467]: time="2026-03-07T01:17:10.709338218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:17:10.729433 containerd[1467]: time="2026-03-07T01:17:10.729396967Z" level=info msg="CreateContainer within sandbox \"f6737b8c775817c273bafe9f194fa98b146214eb3dbf24c59e6001934532d523\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:17:10.738685 containerd[1467]: time="2026-03-07T01:17:10.738594446Z" level=info msg="CreateContainer within sandbox \"f6737b8c775817c273bafe9f194fa98b146214eb3dbf24c59e6001934532d523\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dd34e813fe8159b4b336f5349ac59e4b14ca4a9b194bc5e184071681d5f08041\""
Mar 7 01:17:10.741490 containerd[1467]: time="2026-03-07T01:17:10.740920333Z" level=info msg="StartContainer for \"dd34e813fe8159b4b336f5349ac59e4b14ca4a9b194bc5e184071681d5f08041\""
Mar 7 01:17:10.782353 systemd[1]: Started cri-containerd-dd34e813fe8159b4b336f5349ac59e4b14ca4a9b194bc5e184071681d5f08041.scope - libcontainer container dd34e813fe8159b4b336f5349ac59e4b14ca4a9b194bc5e184071681d5f08041.
Mar 7 01:17:10.832472 containerd[1467]: time="2026-03-07T01:17:10.832379068Z" level=info msg="StartContainer for \"dd34e813fe8159b4b336f5349ac59e4b14ca4a9b194bc5e184071681d5f08041\" returns successfully"
Mar 7 01:17:11.162783 kubelet[2544]: E0307 01:17:11.162473 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:11.172337 kubelet[2544]: I0307 01:17:11.172060 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dcf9bcd64-nsvcq" podStartSLOduration=1.15448082 podStartE2EDuration="3.172046318s" podCreationTimestamp="2026-03-07 01:17:08 +0000 UTC" firstStartedPulling="2026-03-07 01:17:08.691599757 +0000 UTC m=+19.800806196" lastFinishedPulling="2026-03-07 01:17:10.709165235 +0000 UTC m=+21.818371694" observedRunningTime="2026-03-07 01:17:11.171845335 +0000 UTC m=+22.281051804" watchObservedRunningTime="2026-03-07 01:17:11.172046318 +0000 UTC m=+22.281252757"
Mar 7 01:17:12.029490 kubelet[2544]: E0307 01:17:12.029438 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279"
Mar 7 01:17:12.164027 kubelet[2544]: I0307 01:17:12.163877 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:12.164740 kubelet[2544]: E0307 01:17:12.164659 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:14.029855 kubelet[2544]: E0307 01:17:14.029678 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279"
Mar 7 01:17:14.621387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057973306.mount: Deactivated successfully.
Mar 7 01:17:14.650407 containerd[1467]: time="2026-03-07T01:17:14.650274638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:14.651490 containerd[1467]: time="2026-03-07T01:17:14.651416921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:17:14.652309 containerd[1467]: time="2026-03-07T01:17:14.652254580Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:14.655896 containerd[1467]: time="2026-03-07T01:17:14.654706143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:14.655896 containerd[1467]: time="2026-03-07T01:17:14.655477995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.946058641s"
Mar 7 01:17:14.655896 containerd[1467]: time="2026-03-07T01:17:14.655501670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:17:14.661488 containerd[1467]: time="2026-03-07T01:17:14.661321278Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:17:14.682193 containerd[1467]: time="2026-03-07T01:17:14.682144414Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49\""
Mar 7 01:17:14.683804 containerd[1467]: time="2026-03-07T01:17:14.683770057Z" level=info msg="StartContainer for \"d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49\""
Mar 7 01:17:14.732345 systemd[1]: Started cri-containerd-d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49.scope - libcontainer container d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49.
Mar 7 01:17:14.776239 containerd[1467]: time="2026-03-07T01:17:14.776169667Z" level=info msg="StartContainer for \"d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49\" returns successfully"
Mar 7 01:17:14.839665 systemd[1]: cri-containerd-d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49.scope: Deactivated successfully.
Mar 7 01:17:14.945397 containerd[1467]: time="2026-03-07T01:17:14.945223713Z" level=info msg="shim disconnected" id=d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49 namespace=k8s.io
Mar 7 01:17:14.945397 containerd[1467]: time="2026-03-07T01:17:14.945297083Z" level=warning msg="cleaning up after shim disconnected" id=d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49 namespace=k8s.io
Mar 7 01:17:14.945397 containerd[1467]: time="2026-03-07T01:17:14.945311184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:17:15.172442 containerd[1467]: time="2026-03-07T01:17:15.172405829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:17:15.619520 systemd[1]: run-containerd-runc-k8s.io-d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49-runc.KPLzsK.mount: Deactivated successfully.
Mar 7 01:17:15.619633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d49c9512fa811aee4504a4f332efa0cbd40cd53288882d7ae106b4d78f004b49-rootfs.mount: Deactivated successfully.
Mar 7 01:17:16.030034 kubelet[2544]: E0307 01:17:16.029914 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279"
Mar 7 01:17:16.927501 containerd[1467]: time="2026-03-07T01:17:16.927421520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:16.928398 containerd[1467]: time="2026-03-07T01:17:16.928336198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:17:16.929007 containerd[1467]: time="2026-03-07T01:17:16.928956693Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:16.933229 containerd[1467]: time="2026-03-07T01:17:16.931572541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:16.934268 containerd[1467]: time="2026-03-07T01:17:16.934246023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.761803787s"
Mar 7 01:17:16.934349 containerd[1467]: time="2026-03-07T01:17:16.934334291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:17:16.943662 containerd[1467]: time="2026-03-07T01:17:16.943628148Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:17:16.964981 containerd[1467]: time="2026-03-07T01:17:16.964944919Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25\""
Mar 7 01:17:16.965895 containerd[1467]: time="2026-03-07T01:17:16.965864648Z" level=info msg="StartContainer for \"520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25\""
Mar 7 01:17:16.997766 systemd[1]: run-containerd-runc-k8s.io-520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25-runc.4I4OxX.mount: Deactivated successfully.
Mar 7 01:17:17.008523 systemd[1]: Started cri-containerd-520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25.scope - libcontainer container 520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25.
Mar 7 01:17:17.048810 containerd[1467]: time="2026-03-07T01:17:17.048718430Z" level=info msg="StartContainer for \"520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25\" returns successfully"
Mar 7 01:17:17.637340 containerd[1467]: time="2026-03-07T01:17:17.637184722Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:17:17.642076 systemd[1]: cri-containerd-520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25.scope: Deactivated successfully.
Mar 7 01:17:17.651393 kubelet[2544]: I0307 01:17:17.650726 2544 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 01:17:17.691822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25-rootfs.mount: Deactivated successfully.
Mar 7 01:17:17.720531 systemd[1]: Created slice kubepods-besteffort-pod98864617_6487_4993_be29_029e002a44d6.slice - libcontainer container kubepods-besteffort-pod98864617_6487_4993_be29_029e002a44d6.slice.
Mar 7 01:17:17.735778 systemd[1]: Created slice kubepods-burstable-poda5cfa2ca_87cd_4d73_a3e2_864a12def4e1.slice - libcontainer container kubepods-burstable-poda5cfa2ca_87cd_4d73_a3e2_864a12def4e1.slice.
Mar 7 01:17:17.745433 containerd[1467]: time="2026-03-07T01:17:17.745355979Z" level=info msg="shim disconnected" id=520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25 namespace=k8s.io
Mar 7 01:17:17.745433 containerd[1467]: time="2026-03-07T01:17:17.745426631Z" level=warning msg="cleaning up after shim disconnected" id=520d9fc1ac44181efc79e34653a1ead610d23f1eac1c807257078c38c4495f25 namespace=k8s.io
Mar 7 01:17:17.745433 containerd[1467]: time="2026-03-07T01:17:17.745436077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:17:17.753511 systemd[1]: Created slice kubepods-burstable-podc20277e9_b30a_4317_a29f_090f637eb98b.slice - libcontainer container kubepods-burstable-podc20277e9_b30a_4317_a29f_090f637eb98b.slice.
Mar 7 01:17:17.778093 systemd[1]: Created slice kubepods-besteffort-podef7273c6_8ac7_408c_aad0_60960cd76fb7.slice - libcontainer container kubepods-besteffort-podef7273c6_8ac7_408c_aad0_60960cd76fb7.slice.
Mar 7 01:17:17.799477 systemd[1]: Created slice kubepods-besteffort-pod7cb646e4_d34c_4c2b_9a6e_cd8ff6644850.slice - libcontainer container kubepods-besteffort-pod7cb646e4_d34c_4c2b_9a6e_cd8ff6644850.slice.
Mar 7 01:17:17.805073 containerd[1467]: time="2026-03-07T01:17:17.805032169Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:17:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:17:17.818990 systemd[1]: Created slice kubepods-besteffort-pod4df59194_badb_495f_a4b0_832c5bd3bb89.slice - libcontainer container kubepods-besteffort-pod4df59194_badb_495f_a4b0_832c5bd3bb89.slice.
Mar 7 01:17:17.829294 systemd[1]: Created slice kubepods-besteffort-poddb500040_3011_4390_aa58_2e19f8e5b3b6.slice - libcontainer container kubepods-besteffort-poddb500040_3011_4390_aa58_2e19f8e5b3b6.slice.
Mar 7 01:17:17.832921 kubelet[2544]: I0307 01:17:17.832612 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5cfa2ca-87cd-4d73-a3e2-864a12def4e1-config-volume\") pod \"coredns-66bc5c9577-m5smf\" (UID: \"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1\") " pod="kube-system/coredns-66bc5c9577-m5smf"
Mar 7 01:17:17.832921 kubelet[2544]: I0307 01:17:17.832662 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rj28\" (UniqueName: \"kubernetes.io/projected/a5cfa2ca-87cd-4d73-a3e2-864a12def4e1-kube-api-access-6rj28\") pod \"coredns-66bc5c9577-m5smf\" (UID: \"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1\") " pod="kube-system/coredns-66bc5c9577-m5smf"
Mar 7 01:17:17.832921 kubelet[2544]: I0307 01:17:17.832684 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ef7273c6-8ac7-408c-aad0-60960cd76fb7-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-mnqxd\" (UID: \"ef7273c6-8ac7-408c-aad0-60960cd76fb7\") " pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:17.832921 kubelet[2544]: I0307 01:17:17.832699 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndthh\" (UniqueName: \"kubernetes.io/projected/c20277e9-b30a-4317-a29f-090f637eb98b-kube-api-access-ndthh\") pod \"coredns-66bc5c9577-w6vhn\" (UID: \"c20277e9-b30a-4317-a29f-090f637eb98b\") " pod="kube-system/coredns-66bc5c9577-w6vhn"
Mar 7 01:17:17.832921 kubelet[2544]: I0307 01:17:17.832716 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-whisker-ca-bundle\") pod \"whisker-7b478cf965-dvnqr\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " pod="calico-system/whisker-7b478cf965-dvnqr"
Mar 7 01:17:17.833580 kubelet[2544]: I0307 01:17:17.832736 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c20277e9-b30a-4317-a29f-090f637eb98b-config-volume\") pod \"coredns-66bc5c9577-w6vhn\" (UID: \"c20277e9-b30a-4317-a29f-090f637eb98b\") " pod="kube-system/coredns-66bc5c9577-w6vhn"
Mar 7 01:17:17.833580 kubelet[2544]: I0307 01:17:17.832807 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db500040-3011-4390-aa58-2e19f8e5b3b6-calico-apiserver-certs\") pod \"calico-apiserver-cd7b6945c-6xktn\" (UID: \"db500040-3011-4390-aa58-2e19f8e5b3b6\") " pod="calico-system/calico-apiserver-cd7b6945c-6xktn"
Mar 7 01:17:17.833580 kubelet[2544]: I0307 01:17:17.832834 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnxq\" (UniqueName: \"kubernetes.io/projected/db500040-3011-4390-aa58-2e19f8e5b3b6-kube-api-access-mbnxq\") pod \"calico-apiserver-cd7b6945c-6xktn\" (UID: \"db500040-3011-4390-aa58-2e19f8e5b3b6\") " pod="calico-system/calico-apiserver-cd7b6945c-6xktn"
Mar 7 01:17:17.833580 kubelet[2544]: I0307 01:17:17.832851 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h752\" (UniqueName: \"kubernetes.io/projected/7cb646e4-d34c-4c2b-9a6e-cd8ff6644850-kube-api-access-7h752\") pod \"calico-apiserver-cd7b6945c-nn72l\" (UID: \"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850\") " pod="calico-system/calico-apiserver-cd7b6945c-nn72l"
Mar 7 01:17:17.833580 kubelet[2544]: I0307 01:17:17.832895 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df59194-badb-495f-a4b0-832c5bd3bb89-tigera-ca-bundle\") pod \"calico-kube-controllers-7f7d8f8f9f-h65cp\" (UID: \"4df59194-badb-495f-a4b0-832c5bd3bb89\") " pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp"
Mar 7 01:17:17.833689 kubelet[2544]: I0307 01:17:17.832910 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kzn\" (UniqueName: \"kubernetes.io/projected/98864617-6487-4993-be29-029e002a44d6-kube-api-access-z9kzn\") pod \"whisker-7b478cf965-dvnqr\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " pod="calico-system/whisker-7b478cf965-dvnqr"
Mar 7 01:17:17.833689 kubelet[2544]: I0307 01:17:17.832925 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4jm\" (UniqueName: \"kubernetes.io/projected/4df59194-badb-495f-a4b0-832c5bd3bb89-kube-api-access-kt4jm\") pod \"calico-kube-controllers-7f7d8f8f9f-h65cp\" (UID: \"4df59194-badb-495f-a4b0-832c5bd3bb89\") " pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp"
Mar 7 01:17:17.833689 kubelet[2544]: I0307 01:17:17.832962 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7273c6-8ac7-408c-aad0-60960cd76fb7-config\") pod \"goldmane-cccfbd5cf-mnqxd\" (UID: \"ef7273c6-8ac7-408c-aad0-60960cd76fb7\") " pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:17.833689 kubelet[2544]: I0307 01:17:17.832976 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef7273c6-8ac7-408c-aad0-60960cd76fb7-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-mnqxd\" (UID: \"ef7273c6-8ac7-408c-aad0-60960cd76fb7\") " pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:17.833689 kubelet[2544]: I0307 01:17:17.832991 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-nginx-config\") pod \"whisker-7b478cf965-dvnqr\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " pod="calico-system/whisker-7b478cf965-dvnqr"
Mar 7 01:17:17.834245 kubelet[2544]: I0307 01:17:17.833010 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98864617-6487-4993-be29-029e002a44d6-whisker-backend-key-pair\") pod \"whisker-7b478cf965-dvnqr\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " pod="calico-system/whisker-7b478cf965-dvnqr"
Mar 7 01:17:17.834245 kubelet[2544]: I0307 01:17:17.833052 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cb646e4-d34c-4c2b-9a6e-cd8ff6644850-calico-apiserver-certs\") pod \"calico-apiserver-cd7b6945c-nn72l\" (UID: \"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850\") " pod="calico-system/calico-apiserver-cd7b6945c-nn72l"
Mar 7 01:17:17.834245 kubelet[2544]: I0307 01:17:17.833067 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8z2d\" (UniqueName: \"kubernetes.io/projected/ef7273c6-8ac7-408c-aad0-60960cd76fb7-kube-api-access-w8z2d\") pod \"goldmane-cccfbd5cf-mnqxd\" (UID: \"ef7273c6-8ac7-408c-aad0-60960cd76fb7\") " pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:18.034995 containerd[1467]: time="2026-03-07T01:17:18.034615633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b478cf965-dvnqr,Uid:98864617-6487-4993-be29-029e002a44d6,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.037899 systemd[1]: Created slice kubepods-besteffort-pod109d9e3b_df59_4367_b34e_f9e69ac61279.slice - libcontainer container kubepods-besteffort-pod109d9e3b_df59_4367_b34e_f9e69ac61279.slice.
Mar 7 01:17:18.043745 containerd[1467]: time="2026-03-07T01:17:18.043713929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjkhq,Uid:109d9e3b-df59-4367-b34e-f9e69ac61279,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.048365 kubelet[2544]: E0307 01:17:18.048338 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:18.050052 containerd[1467]: time="2026-03-07T01:17:18.049320682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m5smf,Uid:a5cfa2ca-87cd-4d73-a3e2-864a12def4e1,Namespace:kube-system,Attempt:0,}"
Mar 7 01:17:18.083258 kubelet[2544]: E0307 01:17:18.081970 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:18.084547 containerd[1467]: time="2026-03-07T01:17:18.083911299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6vhn,Uid:c20277e9-b30a-4317-a29f-090f637eb98b,Namespace:kube-system,Attempt:0,}"
Mar 7 01:17:18.096653 containerd[1467]: time="2026-03-07T01:17:18.096622686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-mnqxd,Uid:ef7273c6-8ac7-408c-aad0-60960cd76fb7,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.112168 containerd[1467]: time="2026-03-07T01:17:18.112098821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-nn72l,Uid:7cb646e4-d34c-4c2b-9a6e-cd8ff6644850,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.129600 containerd[1467]: time="2026-03-07T01:17:18.129345335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d8f8f9f-h65cp,Uid:4df59194-badb-495f-a4b0-832c5bd3bb89,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.145305 containerd[1467]: time="2026-03-07T01:17:18.145270732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-6xktn,Uid:db500040-3011-4390-aa58-2e19f8e5b3b6,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:18.257118 containerd[1467]: time="2026-03-07T01:17:18.257081872Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:17:18.320326 containerd[1467]: time="2026-03-07T01:17:18.319100515Z" level=info msg="CreateContainer within sandbox \"753d4cbd0c7818063b3fe9b0ff698c6dc6d96cd1a1fa7cf3cff1d270cd16ab9c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25\""
Mar 7 01:17:18.320326 containerd[1467]: time="2026-03-07T01:17:18.320024746Z" level=info msg="StartContainer for \"c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25\""
Mar 7 01:17:18.378917 containerd[1467]: time="2026-03-07T01:17:18.378872094Z" level=error msg="Failed to destroy network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.379380 containerd[1467]: time="2026-03-07T01:17:18.379347243Z" level=error msg="encountered an error cleaning up failed sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.379472 containerd[1467]: time="2026-03-07T01:17:18.379423751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjkhq,Uid:109d9e3b-df59-4367-b34e-f9e69ac61279,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.380609 kubelet[2544]: E0307 01:17:18.380153 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.380609 kubelet[2544]: E0307 01:17:18.380264 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hjkhq"
Mar 7 01:17:18.380609 kubelet[2544]: E0307 01:17:18.380290 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hjkhq"
Mar 7 01:17:18.380743 kubelet[2544]: E0307 01:17:18.380349 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hjkhq_calico-system(109d9e3b-df59-4367-b34e-f9e69ac61279)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hjkhq_calico-system(109d9e3b-df59-4367-b34e-f9e69ac61279)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hjkhq" podUID="109d9e3b-df59-4367-b34e-f9e69ac61279"
Mar 7 01:17:18.399865 containerd[1467]: time="2026-03-07T01:17:18.399810334Z" level=error msg="Failed to destroy network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.401411 containerd[1467]: time="2026-03-07T01:17:18.400386925Z" level=error msg="encountered an error cleaning up failed sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.401411 containerd[1467]: time="2026-03-07T01:17:18.400439191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-mnqxd,Uid:ef7273c6-8ac7-408c-aad0-60960cd76fb7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.401528 kubelet[2544]: E0307 01:17:18.400644 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.401528 kubelet[2544]: E0307 01:17:18.400699 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:18.401528 kubelet[2544]: E0307 01:17:18.400719 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-mnqxd"
Mar 7 01:17:18.401839 kubelet[2544]: E0307 01:17:18.400773 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-mnqxd_calico-system(ef7273c6-8ac7-408c-aad0-60960cd76fb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-mnqxd_calico-system(ef7273c6-8ac7-408c-aad0-60960cd76fb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-mnqxd" podUID="ef7273c6-8ac7-408c-aad0-60960cd76fb7"
Mar 7 01:17:18.403645 containerd[1467]: time="2026-03-07T01:17:18.403432858Z" level=error msg="Failed to destroy network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.405374 containerd[1467]: time="2026-03-07T01:17:18.404513395Z" level=error msg="encountered an error cleaning up failed sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.405374 containerd[1467]: time="2026-03-07T01:17:18.404577150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m5smf,Uid:a5cfa2ca-87cd-4d73-a3e2-864a12def4e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:17:18.405474 kubelet[2544]: E0307 01:17:18.404832 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.405474 kubelet[2544]: E0307 01:17:18.404862 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m5smf" Mar 7 01:17:18.405474 kubelet[2544]: E0307 01:17:18.404880 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m5smf" Mar 7 01:17:18.405547 kubelet[2544]: E0307 01:17:18.404912 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m5smf_kube-system(a5cfa2ca-87cd-4d73-a3e2-864a12def4e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m5smf_kube-system(a5cfa2ca-87cd-4d73-a3e2-864a12def4e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m5smf" 
podUID="a5cfa2ca-87cd-4d73-a3e2-864a12def4e1" Mar 7 01:17:18.409455 containerd[1467]: time="2026-03-07T01:17:18.409429664Z" level=error msg="Failed to destroy network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.410096 containerd[1467]: time="2026-03-07T01:17:18.410071103Z" level=error msg="encountered an error cleaning up failed sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.410507 containerd[1467]: time="2026-03-07T01:17:18.410484525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b478cf965-dvnqr,Uid:98864617-6487-4993-be29-029e002a44d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.411115 kubelet[2544]: E0307 01:17:18.410950 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.411115 kubelet[2544]: E0307 01:17:18.410985 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b478cf965-dvnqr" Mar 7 01:17:18.411115 kubelet[2544]: E0307 01:17:18.411060 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b478cf965-dvnqr" Mar 7 01:17:18.411319 kubelet[2544]: E0307 01:17:18.411246 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b478cf965-dvnqr_calico-system(98864617-6487-4993-be29-029e002a44d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b478cf965-dvnqr_calico-system(98864617-6487-4993-be29-029e002a44d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b478cf965-dvnqr" podUID="98864617-6487-4993-be29-029e002a44d6" Mar 7 01:17:18.417918 containerd[1467]: time="2026-03-07T01:17:18.417607321Z" level=error msg="Failed to destroy network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 7 01:17:18.417974 containerd[1467]: time="2026-03-07T01:17:18.417943844Z" level=error msg="encountered an error cleaning up failed sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.418014 containerd[1467]: time="2026-03-07T01:17:18.417982544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d8f8f9f-h65cp,Uid:4df59194-badb-495f-a4b0-832c5bd3bb89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.418306 kubelet[2544]: E0307 01:17:18.418160 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.418306 kubelet[2544]: E0307 01:17:18.418256 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp" Mar 7 
01:17:18.418306 kubelet[2544]: E0307 01:17:18.418274 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp" Mar 7 01:17:18.418661 kubelet[2544]: E0307 01:17:18.418450 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f7d8f8f9f-h65cp_calico-system(4df59194-badb-495f-a4b0-832c5bd3bb89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f7d8f8f9f-h65cp_calico-system(4df59194-badb-495f-a4b0-832c5bd3bb89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp" podUID="4df59194-badb-495f-a4b0-832c5bd3bb89" Mar 7 01:17:18.440195 systemd[1]: Started cri-containerd-c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25.scope - libcontainer container c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25. 
Mar 7 01:17:18.450764 containerd[1467]: time="2026-03-07T01:17:18.450629869Z" level=error msg="Failed to destroy network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.451155 containerd[1467]: time="2026-03-07T01:17:18.451128378Z" level=error msg="encountered an error cleaning up failed sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.451356 containerd[1467]: time="2026-03-07T01:17:18.451312126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-nn72l,Uid:7cb646e4-d34c-4c2b-9a6e-cd8ff6644850,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.452550 kubelet[2544]: E0307 01:17:18.451686 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.452550 kubelet[2544]: E0307 01:17:18.451763 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-cd7b6945c-nn72l" Mar 7 01:17:18.452550 kubelet[2544]: E0307 01:17:18.451784 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-cd7b6945c-nn72l" Mar 7 01:17:18.452678 kubelet[2544]: E0307 01:17:18.451826 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cd7b6945c-nn72l_calico-system(7cb646e4-d34c-4c2b-9a6e-cd8ff6644850)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cd7b6945c-nn72l_calico-system(7cb646e4-d34c-4c2b-9a6e-cd8ff6644850)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-cd7b6945c-nn72l" podUID="7cb646e4-d34c-4c2b-9a6e-cd8ff6644850" Mar 7 01:17:18.455155 containerd[1467]: time="2026-03-07T01:17:18.454695123Z" level=error msg="Failed to destroy network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 7 01:17:18.455155 containerd[1467]: time="2026-03-07T01:17:18.455145109Z" level=error msg="encountered an error cleaning up failed sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.455155 containerd[1467]: time="2026-03-07T01:17:18.455238714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6vhn,Uid:c20277e9-b30a-4317-a29f-090f637eb98b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.455935 kubelet[2544]: E0307 01:17:18.455511 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.455935 kubelet[2544]: E0307 01:17:18.455589 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w6vhn" Mar 7 01:17:18.455935 kubelet[2544]: 
E0307 01:17:18.455609 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w6vhn" Mar 7 01:17:18.456062 kubelet[2544]: E0307 01:17:18.455687 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w6vhn_kube-system(c20277e9-b30a-4317-a29f-090f637eb98b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w6vhn_kube-system(c20277e9-b30a-4317-a29f-090f637eb98b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w6vhn" podUID="c20277e9-b30a-4317-a29f-090f637eb98b" Mar 7 01:17:18.468583 containerd[1467]: time="2026-03-07T01:17:18.468474316Z" level=error msg="Failed to destroy network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.468930 containerd[1467]: time="2026-03-07T01:17:18.468905738Z" level=error msg="encountered an error cleaning up failed sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.469111 containerd[1467]: time="2026-03-07T01:17:18.469024137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-6xktn,Uid:db500040-3011-4390-aa58-2e19f8e5b3b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.469333 kubelet[2544]: E0307 01:17:18.469288 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:18.469400 kubelet[2544]: E0307 01:17:18.469348 2544 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-cd7b6945c-6xktn" Mar 7 01:17:18.469400 kubelet[2544]: E0307 01:17:18.469366 2544 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-apiserver-cd7b6945c-6xktn" Mar 7 01:17:18.469447 kubelet[2544]: E0307 01:17:18.469416 2544 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cd7b6945c-6xktn_calico-system(db500040-3011-4390-aa58-2e19f8e5b3b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cd7b6945c-6xktn_calico-system(db500040-3011-4390-aa58-2e19f8e5b3b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-cd7b6945c-6xktn" podUID="db500040-3011-4390-aa58-2e19f8e5b3b6" Mar 7 01:17:18.495239 containerd[1467]: time="2026-03-07T01:17:18.494926346Z" level=info msg="StartContainer for \"c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25\" returns successfully" Mar 7 01:17:19.232388 kubelet[2544]: I0307 01:17:19.231956 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:19.234576 containerd[1467]: time="2026-03-07T01:17:19.234545234Z" level=info msg="StopPodSandbox for \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\"" Mar 7 01:17:19.234824 containerd[1467]: time="2026-03-07T01:17:19.234677714Z" level=info msg="Ensure that sandbox cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383 in task-service has been cleanup successfully" Mar 7 01:17:19.238738 kubelet[2544]: I0307 01:17:19.237790 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:19.243283 containerd[1467]: 
time="2026-03-07T01:17:19.243003106Z" level=info msg="StopPodSandbox for \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\"" Mar 7 01:17:19.243283 containerd[1467]: time="2026-03-07T01:17:19.243133881Z" level=info msg="Ensure that sandbox b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6 in task-service has been cleanup successfully" Mar 7 01:17:19.246236 kubelet[2544]: I0307 01:17:19.245514 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:19.246926 containerd[1467]: time="2026-03-07T01:17:19.246885737Z" level=info msg="StopPodSandbox for \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\"" Mar 7 01:17:19.247066 containerd[1467]: time="2026-03-07T01:17:19.247045508Z" level=info msg="Ensure that sandbox c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261 in task-service has been cleanup successfully" Mar 7 01:17:19.253745 kubelet[2544]: I0307 01:17:19.253550 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:19.256137 containerd[1467]: time="2026-03-07T01:17:19.256011447Z" level=info msg="StopPodSandbox for \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\"" Mar 7 01:17:19.257097 containerd[1467]: time="2026-03-07T01:17:19.256152423Z" level=info msg="Ensure that sandbox 36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb in task-service has been cleanup successfully" Mar 7 01:17:19.261084 kubelet[2544]: I0307 01:17:19.261065 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:19.261895 containerd[1467]: time="2026-03-07T01:17:19.261872648Z" level=info msg="StopPodSandbox for 
\"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\"" Mar 7 01:17:19.262126 containerd[1467]: time="2026-03-07T01:17:19.262107862Z" level=info msg="Ensure that sandbox 371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b in task-service has been cleanup successfully" Mar 7 01:17:19.269782 kubelet[2544]: I0307 01:17:19.269753 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:19.271756 containerd[1467]: time="2026-03-07T01:17:19.271724006Z" level=info msg="StopPodSandbox for \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\"" Mar 7 01:17:19.271931 containerd[1467]: time="2026-03-07T01:17:19.271914979Z" level=info msg="Ensure that sandbox 5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c in task-service has been cleanup successfully" Mar 7 01:17:19.285026 kubelet[2544]: I0307 01:17:19.285000 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:19.286980 containerd[1467]: time="2026-03-07T01:17:19.286940665Z" level=info msg="StopPodSandbox for \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\"" Mar 7 01:17:19.287259 containerd[1467]: time="2026-03-07T01:17:19.287239526Z" level=info msg="Ensure that sandbox da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917 in task-service has been cleanup successfully" Mar 7 01:17:19.290833 kubelet[2544]: I0307 01:17:19.290566 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:19.291370 containerd[1467]: time="2026-03-07T01:17:19.291086271Z" level=info msg="StopPodSandbox for \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\"" Mar 7 01:17:19.291370 containerd[1467]: 
time="2026-03-07T01:17:19.291269481Z" level=info msg="Ensure that sandbox 2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d in task-service has been cleanup successfully" Mar 7 01:17:19.313328 kubelet[2544]: I0307 01:17:19.311795 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b859p" podStartSLOduration=3.014793307 podStartE2EDuration="11.311771042s" podCreationTimestamp="2026-03-07 01:17:08 +0000 UTC" firstStartedPulling="2026-03-07 01:17:08.639586409 +0000 UTC m=+19.748792848" lastFinishedPulling="2026-03-07 01:17:16.936564144 +0000 UTC m=+28.045770583" observedRunningTime="2026-03-07 01:17:19.250968557 +0000 UTC m=+30.360175007" watchObservedRunningTime="2026-03-07 01:17:19.311771042 +0000 UTC m=+30.420977491" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.592 [INFO][3759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.592 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" iface="eth0" netns="/var/run/netns/cni-aa72933e-c815-c35c-7713-401b69fa86d7" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.593 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" iface="eth0" netns="/var/run/netns/cni-aa72933e-c815-c35c-7713-401b69fa86d7" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.593 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" iface="eth0" netns="/var/run/netns/cni-aa72933e-c815-c35c-7713-401b69fa86d7" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.593 [INFO][3759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.594 [INFO][3759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.665 [INFO][3823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.673 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.674 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.702 [WARNING][3823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.702 [INFO][3823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.708 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:19.749640 containerd[1467]: 2026-03-07 01:17:19.729 [INFO][3759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:19.759826 systemd[1]: run-netns-cni\x2daa72933e\x2dc815\x2dc35c\x2d7713\x2d401b69fa86d7.mount: Deactivated successfully. 
Mar 7 01:17:19.777094 containerd[1467]: time="2026-03-07T01:17:19.775572087Z" level=info msg="TearDown network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" successfully" Mar 7 01:17:19.777094 containerd[1467]: time="2026-03-07T01:17:19.775626988Z" level=info msg="StopPodSandbox for \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" returns successfully" Mar 7 01:17:19.793037 containerd[1467]: time="2026-03-07T01:17:19.792995006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjkhq,Uid:109d9e3b-df59-4367-b34e-f9e69ac61279,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.585 [INFO][3701] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.588 [INFO][3701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" iface="eth0" netns="/var/run/netns/cni-257a78b2-0e03-6336-e66a-09f556d4ca17" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.588 [INFO][3701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" iface="eth0" netns="/var/run/netns/cni-257a78b2-0e03-6336-e66a-09f556d4ca17" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.590 [INFO][3701] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" iface="eth0" netns="/var/run/netns/cni-257a78b2-0e03-6336-e66a-09f556d4ca17" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.590 [INFO][3701] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.590 [INFO][3701] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.761 [INFO][3821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.761 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.761 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.783 [WARNING][3821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.783 [INFO][3821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.791 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:19.816321 containerd[1467]: 2026-03-07 01:17:19.799 [INFO][3701] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:19.821784 systemd[1]: run-netns-cni\x2d257a78b2\x2d0e03\x2d6336\x2de66a\x2d09f556d4ca17.mount: Deactivated successfully. 
Mar 7 01:17:19.825576 containerd[1467]: time="2026-03-07T01:17:19.825529725Z" level=info msg="TearDown network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" successfully" Mar 7 01:17:19.825576 containerd[1467]: time="2026-03-07T01:17:19.825571548Z" level=info msg="StopPodSandbox for \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" returns successfully" Mar 7 01:17:19.831169 containerd[1467]: time="2026-03-07T01:17:19.831118333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-nn72l,Uid:7cb646e4-d34c-4c2b-9a6e-cd8ff6644850,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.654 [INFO][3776] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.656 [INFO][3776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" iface="eth0" netns="/var/run/netns/cni-825ec004-ef77-4459-883f-0612b4be823d" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.661 [INFO][3776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" iface="eth0" netns="/var/run/netns/cni-825ec004-ef77-4459-883f-0612b4be823d" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.662 [INFO][3776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" iface="eth0" netns="/var/run/netns/cni-825ec004-ef77-4459-883f-0612b4be823d" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.664 [INFO][3776] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.665 [INFO][3776] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.841 [INFO][3850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.841 [INFO][3850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.841 [INFO][3850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.878 [WARNING][3850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.882 [INFO][3850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.889 [INFO][3850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:19.946017 containerd[1467]: 2026-03-07 01:17:19.919 [INFO][3776] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:19.946017 containerd[1467]: time="2026-03-07T01:17:19.945713637Z" level=info msg="TearDown network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" successfully" Mar 7 01:17:19.946017 containerd[1467]: time="2026-03-07T01:17:19.945740807Z" level=info msg="StopPodSandbox for \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" returns successfully" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.683 [INFO][3729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.684 [INFO][3729] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" iface="eth0" netns="/var/run/netns/cni-f7e671fd-3986-23a9-0f44-13a568aded5f" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.684 [INFO][3729] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" iface="eth0" netns="/var/run/netns/cni-f7e671fd-3986-23a9-0f44-13a568aded5f" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.687 [INFO][3729] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" iface="eth0" netns="/var/run/netns/cni-f7e671fd-3986-23a9-0f44-13a568aded5f" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.687 [INFO][3729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.687 [INFO][3729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.836 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.847 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.891 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.929 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.929 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.933 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:19.948601 containerd[1467]: 2026-03-07 01:17:19.939 [INFO][3729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:19.949069 containerd[1467]: time="2026-03-07T01:17:19.948968939Z" level=info msg="TearDown network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" successfully" Mar 7 01:17:19.949069 containerd[1467]: time="2026-03-07T01:17:19.948988036Z" level=info msg="StopPodSandbox for \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" returns successfully" Mar 7 01:17:19.964245 systemd[1]: run-netns-cni\x2df7e671fd\x2d3986\x2d23a9\x2d0f44\x2d13a568aded5f.mount: Deactivated successfully. Mar 7 01:17:19.964358 systemd[1]: run-netns-cni\x2d825ec004\x2def77\x2d4459\x2d883f\x2d0612b4be823d.mount: Deactivated successfully. Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.624 [INFO][3708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.624 [INFO][3708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" iface="eth0" netns="/var/run/netns/cni-c2253dda-049f-4637-92d0-5a8284b5dc8b" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.625 [INFO][3708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" iface="eth0" netns="/var/run/netns/cni-c2253dda-049f-4637-92d0-5a8284b5dc8b" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.626 [INFO][3708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" iface="eth0" netns="/var/run/netns/cni-c2253dda-049f-4637-92d0-5a8284b5dc8b" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.626 [INFO][3708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.626 [INFO][3708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.883 [INFO][3832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.911 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.933 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.954 [WARNING][3832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.954 [INFO][3832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.967 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:19.998333 containerd[1467]: 2026-03-07 01:17:19.985 [INFO][3708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:19.998725 containerd[1467]: time="2026-03-07T01:17:19.998490512Z" level=info msg="TearDown network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" successfully" Mar 7 01:17:19.998725 containerd[1467]: time="2026-03-07T01:17:19.998528313Z" level=info msg="StopPodSandbox for \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" returns successfully" Mar 7 01:17:20.007396 systemd[1]: run-netns-cni\x2dc2253dda\x2d049f\x2d4637\x2d92d0\x2d5a8284b5dc8b.mount: Deactivated successfully. 
Mar 7 01:17:20.012414 containerd[1467]: time="2026-03-07T01:17:20.011183040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-6xktn,Uid:db500040-3011-4390-aa58-2e19f8e5b3b6,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:20.016043 containerd[1467]: time="2026-03-07T01:17:20.015963179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d8f8f9f-h65cp,Uid:4df59194-badb-495f-a4b0-832c5bd3bb89,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:20.018236 containerd[1467]: time="2026-03-07T01:17:20.018098146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-mnqxd,Uid:ef7273c6-8ac7-408c-aad0-60960cd76fb7,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.643 [INFO][3765] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.644 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" iface="eth0" netns="/var/run/netns/cni-b8a28ad3-87e8-ce5a-77f2-83fc2ce7e34a" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.648 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" iface="eth0" netns="/var/run/netns/cni-b8a28ad3-87e8-ce5a-77f2-83fc2ce7e34a" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.651 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" iface="eth0" netns="/var/run/netns/cni-b8a28ad3-87e8-ce5a-77f2-83fc2ce7e34a" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.651 [INFO][3765] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.651 [INFO][3765] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.901 [INFO][3840] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.910 [INFO][3840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.969 [INFO][3840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.988 [WARNING][3840] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.988 [INFO][3840] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:19.995 [INFO][3840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:20.033845 containerd[1467]: 2026-03-07 01:17:20.010 [INFO][3765] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:20.039662 containerd[1467]: time="2026-03-07T01:17:20.037641164Z" level=info msg="TearDown network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" successfully" Mar 7 01:17:20.039662 containerd[1467]: time="2026-03-07T01:17:20.037666108Z" level=info msg="StopPodSandbox for \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" returns successfully" Mar 7 01:17:20.041305 systemd[1]: run-netns-cni\x2db8a28ad3\x2d87e8\x2dce5a\x2d77f2\x2d83fc2ce7e34a.mount: Deactivated successfully. 
Mar 7 01:17:20.043327 kubelet[2544]: E0307 01:17:20.042802 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:20.043388 containerd[1467]: time="2026-03-07T01:17:20.043253962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m5smf,Uid:a5cfa2ca-87cd-4d73-a3e2-864a12def4e1,Namespace:kube-system,Attempt:1,}" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.647 [INFO][3794] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.651 [INFO][3794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" iface="eth0" netns="/var/run/netns/cni-414c11ed-10a0-cb61-7b3b-c5f724b1a9c5" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.652 [INFO][3794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" iface="eth0" netns="/var/run/netns/cni-414c11ed-10a0-cb61-7b3b-c5f724b1a9c5" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.657 [INFO][3794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" iface="eth0" netns="/var/run/netns/cni-414c11ed-10a0-cb61-7b3b-c5f724b1a9c5" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.657 [INFO][3794] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.657 [INFO][3794] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.927 [INFO][3842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.927 [INFO][3842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:19.997 [INFO][3842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:20.027 [WARNING][3842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:20.027 [INFO][3842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:20.049 [INFO][3842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:20.072279 containerd[1467]: 2026-03-07 01:17:20.064 [INFO][3794] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:20.080637 containerd[1467]: time="2026-03-07T01:17:20.074801272Z" level=info msg="TearDown network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" successfully" Mar 7 01:17:20.080637 containerd[1467]: time="2026-03-07T01:17:20.074820151Z" level=info msg="StopPodSandbox for \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" returns successfully" Mar 7 01:17:20.078498 systemd[1]: run-netns-cni\x2d414c11ed\x2d10a0\x2dcb61\x2d7b3b\x2dc5f724b1a9c5.mount: Deactivated successfully. Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.635 [INFO][3764] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.636 [INFO][3764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" iface="eth0" netns="/var/run/netns/cni-1a41d4db-a7d2-7520-9ad1-449117321198" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.637 [INFO][3764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" iface="eth0" netns="/var/run/netns/cni-1a41d4db-a7d2-7520-9ad1-449117321198" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.638 [INFO][3764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" iface="eth0" netns="/var/run/netns/cni-1a41d4db-a7d2-7520-9ad1-449117321198" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.638 [INFO][3764] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.638 [INFO][3764] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.936 [INFO][3836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:19.936 [INFO][3836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:20.049 [INFO][3836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:20.072 [WARNING][3836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:20.083 [INFO][3836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:20.098 [INFO][3836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:20.143584 containerd[1467]: 2026-03-07 01:17:20.119 [INFO][3764] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:20.144567 containerd[1467]: time="2026-03-07T01:17:20.144516913Z" level=info msg="TearDown network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" successfully" Mar 7 01:17:20.144645 containerd[1467]: time="2026-03-07T01:17:20.144629092Z" level=info msg="StopPodSandbox for \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" returns successfully" Mar 7 01:17:20.151725 kubelet[2544]: I0307 01:17:20.151699 2544 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-whisker-ca-bundle\") pod \"98864617-6487-4993-be29-029e002a44d6\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " Mar 7 01:17:20.152106 kubelet[2544]: I0307 01:17:20.152090 2544 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9kzn\" (UniqueName: \"kubernetes.io/projected/98864617-6487-4993-be29-029e002a44d6-kube-api-access-z9kzn\") pod 
\"98864617-6487-4993-be29-029e002a44d6\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " Mar 7 01:17:20.152242 kubelet[2544]: I0307 01:17:20.152228 2544 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-nginx-config\") pod \"98864617-6487-4993-be29-029e002a44d6\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " Mar 7 01:17:20.152765 kubelet[2544]: I0307 01:17:20.152748 2544 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98864617-6487-4993-be29-029e002a44d6-whisker-backend-key-pair\") pod \"98864617-6487-4993-be29-029e002a44d6\" (UID: \"98864617-6487-4993-be29-029e002a44d6\") " Mar 7 01:17:20.155289 kubelet[2544]: E0307 01:17:20.151797 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:20.159184 containerd[1467]: time="2026-03-07T01:17:20.158951635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6vhn,Uid:c20277e9-b30a-4317-a29f-090f637eb98b,Namespace:kube-system,Attempt:1,}" Mar 7 01:17:20.160574 kubelet[2544]: I0307 01:17:20.160291 2544 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "98864617-6487-4993-be29-029e002a44d6" (UID: "98864617-6487-4993-be29-029e002a44d6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:17:20.161653 kubelet[2544]: I0307 01:17:20.161599 2544 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "98864617-6487-4993-be29-029e002a44d6" (UID: "98864617-6487-4993-be29-029e002a44d6"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:17:20.181531 kubelet[2544]: I0307 01:17:20.181342 2544 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98864617-6487-4993-be29-029e002a44d6-kube-api-access-z9kzn" (OuterVolumeSpecName: "kube-api-access-z9kzn") pod "98864617-6487-4993-be29-029e002a44d6" (UID: "98864617-6487-4993-be29-029e002a44d6"). InnerVolumeSpecName "kube-api-access-z9kzn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:17:20.181531 kubelet[2544]: I0307 01:17:20.181453 2544 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98864617-6487-4993-be29-029e002a44d6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "98864617-6487-4993-be29-029e002a44d6" (UID: "98864617-6487-4993-be29-029e002a44d6"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:17:20.254605 kubelet[2544]: I0307 01:17:20.254549 2544 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98864617-6487-4993-be29-029e002a44d6-whisker-backend-key-pair\") on node \"172-236-123-47\" DevicePath \"\"" Mar 7 01:17:20.255532 kubelet[2544]: I0307 01:17:20.255507 2544 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-whisker-ca-bundle\") on node \"172-236-123-47\" DevicePath \"\"" Mar 7 01:17:20.257324 kubelet[2544]: I0307 01:17:20.255644 2544 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9kzn\" (UniqueName: \"kubernetes.io/projected/98864617-6487-4993-be29-029e002a44d6-kube-api-access-z9kzn\") on node \"172-236-123-47\" DevicePath \"\"" Mar 7 01:17:20.257324 kubelet[2544]: I0307 01:17:20.255659 2544 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/98864617-6487-4993-be29-029e002a44d6-nginx-config\") on node \"172-236-123-47\" DevicePath \"\"" Mar 7 01:17:20.368427 systemd[1]: Removed slice kubepods-besteffort-pod98864617_6487_4993_be29_029e002a44d6.slice - libcontainer container kubepods-besteffort-pod98864617_6487_4993_be29_029e002a44d6.slice. Mar 7 01:17:20.467846 systemd[1]: Created slice kubepods-besteffort-poddbd64a92_9c62_476d_9ca1_dc1c5857a76f.slice - libcontainer container kubepods-besteffort-poddbd64a92_9c62_476d_9ca1_dc1c5857a76f.slice. 
Mar 7 01:17:20.561016 kubelet[2544]: I0307 01:17:20.560838 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbd64a92-9c62-476d-9ca1-dc1c5857a76f-whisker-ca-bundle\") pod \"whisker-5c88b4c946-vh79t\" (UID: \"dbd64a92-9c62-476d-9ca1-dc1c5857a76f\") " pod="calico-system/whisker-5c88b4c946-vh79t" Mar 7 01:17:20.561016 kubelet[2544]: I0307 01:17:20.560935 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q864v\" (UniqueName: \"kubernetes.io/projected/dbd64a92-9c62-476d-9ca1-dc1c5857a76f-kube-api-access-q864v\") pod \"whisker-5c88b4c946-vh79t\" (UID: \"dbd64a92-9c62-476d-9ca1-dc1c5857a76f\") " pod="calico-system/whisker-5c88b4c946-vh79t" Mar 7 01:17:20.561016 kubelet[2544]: I0307 01:17:20.560959 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbd64a92-9c62-476d-9ca1-dc1c5857a76f-nginx-config\") pod \"whisker-5c88b4c946-vh79t\" (UID: \"dbd64a92-9c62-476d-9ca1-dc1c5857a76f\") " pod="calico-system/whisker-5c88b4c946-vh79t" Mar 7 01:17:20.561358 kubelet[2544]: I0307 01:17:20.561080 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbd64a92-9c62-476d-9ca1-dc1c5857a76f-whisker-backend-key-pair\") pod \"whisker-5c88b4c946-vh79t\" (UID: \"dbd64a92-9c62-476d-9ca1-dc1c5857a76f\") " pod="calico-system/whisker-5c88b4c946-vh79t" Mar 7 01:17:20.786841 containerd[1467]: time="2026-03-07T01:17:20.786718911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c88b4c946-vh79t,Uid:dbd64a92-9c62-476d-9ca1-dc1c5857a76f,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:20.817431 systemd-networkd[1381]: cali59e19621222: Link UP Mar 7 01:17:20.823993 systemd-networkd[1381]: 
cali59e19621222: Gained carrier Mar 7 01:17:20.937013 systemd-networkd[1381]: cali16f77c12f55: Link UP Mar 7 01:17:20.940547 systemd-networkd[1381]: cali16f77c12f55: Gained carrier Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.027 [ERROR][3902] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.063 [INFO][3902] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0 calico-apiserver-cd7b6945c- calico-system 7cb646e4-d34c-4c2b-9a6e-cd8ff6644850 925 0 2026-03-07 01:17:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cd7b6945c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-123-47 calico-apiserver-cd7b6945c-nn72l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali59e19621222 [] [] }} ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.068 [INFO][3902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.264 [INFO][3949] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" HandleID="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.330 [INFO][3949] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" HandleID="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395d10), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"calico-apiserver-cd7b6945c-nn72l", "timestamp":"2026-03-07 01:17:20.264845582 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003998c0)} Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.332 [INFO][3949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.334 [INFO][3949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.335 [INFO][3949] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.361 [INFO][3949] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.412 [INFO][3949] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.469 [INFO][3949] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.520 [INFO][3949] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.617 [INFO][3949] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.617 [INFO][3949] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.644 [INFO][3949] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.685 [INFO][3949] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.710 [INFO][3949] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.193/26] block=192.168.97.192/26 
handle="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.710 [INFO][3949] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.193/26] handle="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" host="172-236-123-47" Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.710 [INFO][3949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:20.959758 containerd[1467]: 2026-03-07 01:17:20.711 [INFO][3949] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.193/26] IPv6=[] ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" HandleID="k8s-pod-network.b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.742 [INFO][3902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"calico-apiserver-cd7b6945c-nn72l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59e19621222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.742 [INFO][3902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.193/32] ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.742 [INFO][3902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59e19621222 ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.842 [INFO][3902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.844 [INFO][3902] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d", Pod:"calico-apiserver-cd7b6945c-nn72l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59e19621222", MAC:"92:d0:9e:f5:d0:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:20.963736 containerd[1467]: 2026-03-07 01:17:20.918 [INFO][3902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-nn72l" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:20.983430 systemd[1]: run-netns-cni\x2d1a41d4db\x2da7d2\x2d7520\x2d9ad1\x2d449117321198.mount: Deactivated successfully. Mar 7 01:17:20.983546 systemd[1]: var-lib-kubelet-pods-98864617\x2d6487\x2d4993\x2dbe29\x2d029e002a44d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9kzn.mount: Deactivated successfully. Mar 7 01:17:20.983628 systemd[1]: var-lib-kubelet-pods-98864617\x2d6487\x2d4993\x2dbe29\x2d029e002a44d6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:17:21.036216 kubelet[2544]: I0307 01:17:21.036151 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98864617-6487-4993-be29-029e002a44d6" path="/var/lib/kubelet/pods/98864617-6487-4993-be29-029e002a44d6/volumes" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.051 [ERROR][3882] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.132 [INFO][3882] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-csi--node--driver--hjkhq-eth0 csi-node-driver- calico-system 109d9e3b-df59-4367-b34e-f9e69ac61279 926 0 2026-03-07 01:17:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-123-47 csi-node-driver-hjkhq eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali16f77c12f55 [] [] }} ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.133 [INFO][3882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.528 [INFO][3990] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" HandleID="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.640 [INFO][3990] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" HandleID="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388110), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"csi-node-driver-hjkhq", "timestamp":"2026-03-07 01:17:20.528275527 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192580)} Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.640 [INFO][3990] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.711 [INFO][3990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.713 [INFO][3990] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.734 [INFO][3990] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.760 [INFO][3990] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.823 [INFO][3990] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.835 [INFO][3990] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.845 [INFO][3990] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.845 [INFO][3990] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.874 [INFO][3990] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.885 [INFO][3990] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 
01:17:20.898 [INFO][3990] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.194/26] block=192.168.97.192/26 handle="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.898 [INFO][3990] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.194/26] handle="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" host="172-236-123-47" Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.898 [INFO][3990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.045909 containerd[1467]: 2026-03-07 01:17:20.898 [INFO][3990] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.194/26] IPv6=[] ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" HandleID="k8s-pod-network.a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:20.918 [INFO][3882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-csi--node--driver--hjkhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"109d9e3b-df59-4367-b34e-f9e69ac61279", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"csi-node-driver-hjkhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16f77c12f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:20.918 [INFO][3882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.194/32] ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:20.918 [INFO][3882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16f77c12f55 ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:20.950 [INFO][3882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:20.959 [INFO][3882] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-csi--node--driver--hjkhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"109d9e3b-df59-4367-b34e-f9e69ac61279", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b", Pod:"csi-node-driver-hjkhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16f77c12f55", MAC:"8a:81:31:78:87:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.049302 containerd[1467]: 2026-03-07 01:17:21.026 [INFO][3882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b" Namespace="calico-system" Pod="csi-node-driver-hjkhq" WorkloadEndpoint="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:21.106634 containerd[1467]: time="2026-03-07T01:17:21.106101460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.106634 containerd[1467]: time="2026-03-07T01:17:21.106188937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.106634 containerd[1467]: time="2026-03-07T01:17:21.106233097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.106634 containerd[1467]: time="2026-03-07T01:17:21.106374848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.172032 systemd-networkd[1381]: cali8e4edca80f8: Link UP Mar 7 01:17:21.177544 containerd[1467]: time="2026-03-07T01:17:21.174813202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.177544 containerd[1467]: time="2026-03-07T01:17:21.174865510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.177544 containerd[1467]: time="2026-03-07T01:17:21.174879542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.177544 containerd[1467]: time="2026-03-07T01:17:21.177312695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.188342 systemd-networkd[1381]: cali8e4edca80f8: Gained carrier Mar 7 01:17:21.200363 systemd[1]: Started cri-containerd-b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d.scope - libcontainer container b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d. Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.487 [ERROR][3967] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.686 [INFO][3967] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0 calico-kube-controllers-7f7d8f8f9f- calico-system 4df59194-badb-495f-a4b0-832c5bd3bb89 931 0 2026-03-07 01:17:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f7d8f8f9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-123-47 calico-kube-controllers-7f7d8f8f9f-h65cp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8e4edca80f8 [] [] }} ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.686 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" 
WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.872 [INFO][4100] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" HandleID="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.901 [INFO][4100] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" HandleID="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fa1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"calico-kube-controllers-7f7d8f8f9f-h65cp", "timestamp":"2026-03-07 01:17:20.87267053 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384580)} Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.901 [INFO][4100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.901 [INFO][4100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.901 [INFO][4100] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.916 [INFO][4100] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.953 [INFO][4100] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:20.994 [INFO][4100] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.026 [INFO][4100] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.035 [INFO][4100] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.035 [INFO][4100] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.043 [INFO][4100] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2 Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.074 [INFO][4100] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.107 [INFO][4100] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.195/26] block=192.168.97.192/26 
handle="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.107 [INFO][4100] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.195/26] handle="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" host="172-236-123-47" Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.107 [INFO][4100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.256350 containerd[1467]: 2026-03-07 01:17:21.107 [INFO][4100] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.195/26] IPv6=[] ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" HandleID="k8s-pod-network.8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.137 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0", GenerateName:"calico-kube-controllers-7f7d8f8f9f-", Namespace:"calico-system", SelfLink:"", UID:"4df59194-badb-495f-a4b0-832c5bd3bb89", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d8f8f9f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"calico-kube-controllers-7f7d8f8f9f-h65cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e4edca80f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.138 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.195/32] ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.138 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e4edca80f8 ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.190 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" 
WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.199 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0", GenerateName:"calico-kube-controllers-7f7d8f8f9f-", Namespace:"calico-system", SelfLink:"", UID:"4df59194-badb-495f-a4b0-832c5bd3bb89", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d8f8f9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2", Pod:"calico-kube-controllers-7f7d8f8f9f-h65cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e4edca80f8", MAC:"52:e6:ec:e7:a6:fa", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.257442 containerd[1467]: 2026-03-07 01:17:21.240 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2" Namespace="calico-system" Pod="calico-kube-controllers-7f7d8f8f9f-h65cp" WorkloadEndpoint="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:21.275846 systemd[1]: Started cri-containerd-a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b.scope - libcontainer container a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b. Mar 7 01:17:21.280317 systemd-networkd[1381]: cali5dcafb17ad6: Link UP Mar 7 01:17:21.286535 systemd-networkd[1381]: cali5dcafb17ad6: Gained carrier Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.512 [ERROR][4028] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.655 [INFO][4028] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0 coredns-66bc5c9577- kube-system c20277e9-b30a-4317-a29f-090f637eb98b 928 0 2026-03-07 01:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-123-47 coredns-66bc5c9577-w6vhn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5dcafb17ad6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" 
Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.655 [INFO][4028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.892 [INFO][4098] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" HandleID="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.939 [INFO][4098] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" HandleID="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdea0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-123-47", "pod":"coredns-66bc5c9577-w6vhn", "timestamp":"2026-03-07 01:17:20.89205905 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000396580)} Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:20.939 [INFO][4098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.108 [INFO][4098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.108 [INFO][4098] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.112 [INFO][4098] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.138 [INFO][4098] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.174 [INFO][4098] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.192 [INFO][4098] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.204 [INFO][4098] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.204 [INFO][4098] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.214 [INFO][4098] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.233 [INFO][4098] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.249 [INFO][4098] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.196/26] block=192.168.97.192/26 
handle="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.249 [INFO][4098] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.196/26] handle="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" host="172-236-123-47" Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.249 [INFO][4098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.341306 containerd[1467]: 2026-03-07 01:17:21.249 [INFO][4098] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.196/26] IPv6=[] ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" HandleID="k8s-pod-network.b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.265 [INFO][4028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20277e9-b30a-4317-a29f-090f637eb98b", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"coredns-66bc5c9577-w6vhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dcafb17ad6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.265 [INFO][4028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.196/32] ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.265 [INFO][4028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dcafb17ad6 ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" 
WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.288 [INFO][4028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.292 [INFO][4028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20277e9-b30a-4317-a29f-090f637eb98b", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe", Pod:"coredns-66bc5c9577-w6vhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dcafb17ad6", MAC:"d6:40:67:82:dd:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.341864 containerd[1467]: 2026-03-07 01:17:21.327 [INFO][4028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe" Namespace="kube-system" Pod="coredns-66bc5c9577-w6vhn" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:21.397946 containerd[1467]: time="2026-03-07T01:17:21.390649107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.397946 containerd[1467]: time="2026-03-07T01:17:21.390719386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.397946 containerd[1467]: time="2026-03-07T01:17:21.390730881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.397946 containerd[1467]: time="2026-03-07T01:17:21.390816234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.398791 systemd-networkd[1381]: calib41b8cf5537: Link UP Mar 7 01:17:21.407276 systemd-networkd[1381]: calib41b8cf5537: Gained carrier Mar 7 01:17:21.478723 systemd[1]: Started cri-containerd-8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2.scope - libcontainer container 8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2. Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:20.494 [ERROR][3948] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:20.654 [INFO][3948] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0 calico-apiserver-cd7b6945c- calico-system db500040-3011-4390-aa58-2e19f8e5b3b6 932 0 2026-03-07 01:17:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cd7b6945c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-123-47 calico-apiserver-cd7b6945c-6xktn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib41b8cf5537 [] [] }} ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:20.655 [INFO][3948] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:20.974 [INFO][4095] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" HandleID="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.031 [INFO][4095] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" HandleID="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f100), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"calico-apiserver-cd7b6945c-6xktn", "timestamp":"2026-03-07 01:17:20.974675496 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042e6e0)} Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.031 [INFO][4095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.249 [INFO][4095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.250 [INFO][4095] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.252 [INFO][4095] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.264 [INFO][4095] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.284 [INFO][4095] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.291 [INFO][4095] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.295 [INFO][4095] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.295 [INFO][4095] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.298 [INFO][4095] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.324 [INFO][4095] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.343 [INFO][4095] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.197/26] block=192.168.97.192/26 
handle="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.343 [INFO][4095] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.197/26] handle="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" host="172-236-123-47" Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.343 [INFO][4095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.494320 containerd[1467]: 2026-03-07 01:17:21.343 [INFO][4095] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.197/26] IPv6=[] ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" HandleID="k8s-pod-network.40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.364 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"db500040-3011-4390-aa58-2e19f8e5b3b6", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"calico-apiserver-cd7b6945c-6xktn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib41b8cf5537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.365 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.197/32] ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.365 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib41b8cf5537 ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.403 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.417 [INFO][3948] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"db500040-3011-4390-aa58-2e19f8e5b3b6", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f", Pod:"calico-apiserver-cd7b6945c-6xktn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib41b8cf5537", MAC:"da:b4:37:ed:d7:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.494927 containerd[1467]: 2026-03-07 01:17:21.453 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f" Namespace="calico-system" Pod="calico-apiserver-cd7b6945c-6xktn" WorkloadEndpoint="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:21.555121 containerd[1467]: time="2026-03-07T01:17:21.555042570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hjkhq,Uid:109d9e3b-df59-4367-b34e-f9e69ac61279,Namespace:calico-system,Attempt:1,} returns sandbox id \"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b\"" Mar 7 01:17:21.561670 containerd[1467]: time="2026-03-07T01:17:21.560096451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.561670 containerd[1467]: time="2026-03-07T01:17:21.560152758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.561670 containerd[1467]: time="2026-03-07T01:17:21.560173705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.561670 containerd[1467]: time="2026-03-07T01:17:21.560305212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.566871 containerd[1467]: time="2026-03-07T01:17:21.566613154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:17:21.580917 systemd-networkd[1381]: cali784d0210a5c: Link UP Mar 7 01:17:21.583161 systemd-networkd[1381]: cali784d0210a5c: Gained carrier Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:20.532 [ERROR][3957] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:20.653 [INFO][3957] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0 goldmane-cccfbd5cf- calico-system ef7273c6-8ac7-408c-aad0-60960cd76fb7 927 0 2026-03-07 01:17:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-123-47 goldmane-cccfbd5cf-mnqxd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali784d0210a5c [] [] }} ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:20.654 [INFO][3957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.006 [INFO][4094] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" HandleID="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.035 [INFO][4094] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" HandleID="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"goldmane-cccfbd5cf-mnqxd", "timestamp":"2026-03-07 01:17:21.006133141 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000205760)} Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.037 [INFO][4094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.345 [INFO][4094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.345 [INFO][4094] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.355 [INFO][4094] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.380 [INFO][4094] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.422 [INFO][4094] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.444 [INFO][4094] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.459 [INFO][4094] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.460 [INFO][4094] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.470 [INFO][4094] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882 Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.478 [INFO][4094] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.498 [INFO][4094] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.198/26] block=192.168.97.192/26 
handle="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.498 [INFO][4094] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.198/26] handle="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" host="172-236-123-47" Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.498 [INFO][4094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.635771 containerd[1467]: 2026-03-07 01:17:21.498 [INFO][4094] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.198/26] IPv6=[] ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" HandleID="k8s-pod-network.f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.573 [INFO][3957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ef7273c6-8ac7-408c-aad0-60960cd76fb7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"goldmane-cccfbd5cf-mnqxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali784d0210a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.573 [INFO][3957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.198/32] ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.573 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali784d0210a5c ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.584 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.585 [INFO][3957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ef7273c6-8ac7-408c-aad0-60960cd76fb7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882", Pod:"goldmane-cccfbd5cf-mnqxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali784d0210a5c", MAC:"9e:91:b5:02:14:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.638001 containerd[1467]: 2026-03-07 01:17:21.612 [INFO][3957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882" Namespace="calico-system" Pod="goldmane-cccfbd5cf-mnqxd" WorkloadEndpoint="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:21.675364 systemd[1]: Started 
cri-containerd-b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe.scope - libcontainer container b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe. Mar 7 01:17:21.690975 systemd-networkd[1381]: cali2aab1de8b65: Link UP Mar 7 01:17:21.695267 systemd-networkd[1381]: cali2aab1de8b65: Gained carrier Mar 7 01:17:21.711915 containerd[1467]: time="2026-03-07T01:17:21.711648712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-nn72l,Uid:7cb646e4-d34c-4c2b-9a6e-cd8ff6644850,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d\"" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:20.519 [ERROR][3982] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:20.654 [INFO][3982] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0 coredns-66bc5c9577- kube-system a5cfa2ca-87cd-4d73-a3e2-864a12def4e1 930 0 2026-03-07 01:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-123-47 coredns-66bc5c9577-m5smf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2aab1de8b65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:20.654 [INFO][3982] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.034 [INFO][4096] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" HandleID="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.070 [INFO][4096] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" HandleID="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004133f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-123-47", "pod":"coredns-66bc5c9577-m5smf", "timestamp":"2026-03-07 01:17:21.0342898 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00021e000)} Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.071 [INFO][4096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.498 [INFO][4096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.499 [INFO][4096] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47' Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.510 [INFO][4096] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.591 [INFO][4096] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.618 [INFO][4096] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.623 [INFO][4096] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.629 [INFO][4096] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.629 [INFO][4096] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.631 [INFO][4096] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570 Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.640 [INFO][4096] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.656 [INFO][4096] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.199/26] block=192.168.97.192/26 
handle="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.659 [INFO][4096] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.199/26] handle="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" host="172-236-123-47" Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.659 [INFO][4096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:21.777061 containerd[1467]: 2026-03-07 01:17:21.659 [INFO][4096] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.199/26] IPv6=[] ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" HandleID="k8s-pod-network.c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.683 [INFO][3982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"coredns-66bc5c9577-m5smf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aab1de8b65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.683 [INFO][3982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.199/32] ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.683 [INFO][3982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2aab1de8b65 ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" 
WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.697 [INFO][3982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.697 [INFO][3982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570", Pod:"coredns-66bc5c9577-m5smf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aab1de8b65", MAC:"be:bc:2e:df:86:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:21.777929 containerd[1467]: 2026-03-07 01:17:21.753 [INFO][3982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570" Namespace="kube-system" Pod="coredns-66bc5c9577-m5smf" WorkloadEndpoint="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:21.800996 containerd[1467]: time="2026-03-07T01:17:21.800619704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.800996 containerd[1467]: time="2026-03-07T01:17:21.800709878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.800996 containerd[1467]: time="2026-03-07T01:17:21.800734152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.800996 containerd[1467]: time="2026-03-07T01:17:21.800849012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.817115 containerd[1467]: time="2026-03-07T01:17:21.814832953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.817115 containerd[1467]: time="2026-03-07T01:17:21.814941288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:21.817115 containerd[1467]: time="2026-03-07T01:17:21.814964901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.817745 containerd[1467]: time="2026-03-07T01:17:21.816251606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:21.864663 systemd-networkd[1381]: cali395e185aa5c: Link UP Mar 7 01:17:21.867748 containerd[1467]: time="2026-03-07T01:17:21.867717642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d8f8f9f-h65cp,Uid:4df59194-badb-495f-a4b0-832c5bd3bb89,Namespace:calico-system,Attempt:1,} returns sandbox id \"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2\"" Mar 7 01:17:21.875708 systemd-networkd[1381]: cali395e185aa5c: Gained carrier Mar 7 01:17:21.899345 containerd[1467]: time="2026-03-07T01:17:21.895518678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:21.903509 containerd[1467]: time="2026-03-07T01:17:21.899769435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:17:21.903509 containerd[1467]: time="2026-03-07T01:17:21.899808483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:21.903509 containerd[1467]: time="2026-03-07T01:17:21.900681013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:21.951405 containerd[1467]: time="2026-03-07T01:17:21.951338935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6vhn,Uid:c20277e9-b30a-4317-a29f-090f637eb98b,Namespace:kube-system,Attempt:1,} returns sandbox id \"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe\""
Mar 7 01:17:21.953721 systemd[1]: Started cri-containerd-40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f.scope - libcontainer container 40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f.
Mar 7 01:17:21.957949 kubelet[2544]: E0307 01:17:21.954362 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.061 [ERROR][4123] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.126 [INFO][4123] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0 whisker-5c88b4c946- calico-system dbd64a92-9c62-476d-9ca1-dc1c5857a76f 953 0 2026-03-07 01:17:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c88b4c946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-123-47 whisker-5c88b4c946-vh79t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali395e185aa5c [] [] }} ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.126 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.358 [INFO][4202] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" HandleID="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Workload="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.411 [INFO][4202] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" HandleID="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Workload="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-123-47", "pod":"whisker-5c88b4c946-vh79t", "timestamp":"2026-03-07 01:17:21.358416294 +0000 UTC"}, Hostname:"172-236-123-47", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000299b80)}
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.414 [INFO][4202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.663 [INFO][4202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.663 [INFO][4202] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-123-47'
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.672 [INFO][4202] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.682 [INFO][4202] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.754 [INFO][4202] ipam/ipam.go 526: Trying affinity for 192.168.97.192/26 host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.765 [INFO][4202] ipam/ipam.go 160: Attempting to load block cidr=192.168.97.192/26 host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.774 [INFO][4202] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.774 [INFO][4202] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.791 [INFO][4202] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.802 [INFO][4202] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.821 [INFO][4202] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.97.200/26] block=192.168.97.192/26 handle="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.822 [INFO][4202] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.97.200/26] handle="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" host="172-236-123-47"
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.822 [INFO][4202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:21.980402 containerd[1467]: 2026-03-07 01:17:21.822 [INFO][4202] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.97.200/26] IPv6=[] ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" HandleID="k8s-pod-network.ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Workload="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.851 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0", GenerateName:"whisker-5c88b4c946-", Namespace:"calico-system", SelfLink:"", UID:"dbd64a92-9c62-476d-9ca1-dc1c5857a76f", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c88b4c946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"", Pod:"whisker-5c88b4c946-vh79t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali395e185aa5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.851 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.200/32] ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.851 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali395e185aa5c ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.881 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.897 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0", GenerateName:"whisker-5c88b4c946-", Namespace:"calico-system", SelfLink:"", UID:"dbd64a92-9c62-476d-9ca1-dc1c5857a76f", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c88b4c946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594", Pod:"whisker-5c88b4c946-vh79t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali395e185aa5c", MAC:"56:11:50:d4:cd:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:21.987250 containerd[1467]: 2026-03-07 01:17:21.938 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594" Namespace="calico-system" Pod="whisker-5c88b4c946-vh79t" WorkloadEndpoint="172--236--123--47-k8s-whisker--5c88b4c946--vh79t-eth0"
Mar 7 01:17:21.993455 containerd[1467]: time="2026-03-07T01:17:21.993314434Z" level=info msg="CreateContainer within sandbox \"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:17:22.039232 containerd[1467]: time="2026-03-07T01:17:22.036283288Z" level=info msg="CreateContainer within sandbox \"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"720bfdc44b7094c4d972fc86458c1cae55e2be3e86a02b078ed751f555a26e88\""
Mar 7 01:17:22.039232 containerd[1467]: time="2026-03-07T01:17:22.037798332Z" level=info msg="StartContainer for \"720bfdc44b7094c4d972fc86458c1cae55e2be3e86a02b078ed751f555a26e88\""
Mar 7 01:17:22.045499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254843841.mount: Deactivated successfully.
Mar 7 01:17:22.071399 systemd[1]: Started cri-containerd-f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882.scope - libcontainer container f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882.
Mar 7 01:17:22.093400 systemd-networkd[1381]: cali59e19621222: Gained IPv6LL
Mar 7 01:17:22.096778 systemd[1]: Started cri-containerd-c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570.scope - libcontainer container c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570.
Mar 7 01:17:22.188350 systemd[1]: Started cri-containerd-720bfdc44b7094c4d972fc86458c1cae55e2be3e86a02b078ed751f555a26e88.scope - libcontainer container 720bfdc44b7094c4d972fc86458c1cae55e2be3e86a02b078ed751f555a26e88.
Mar 7 01:17:22.196527 containerd[1467]: time="2026-03-07T01:17:22.196315621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:17:22.196527 containerd[1467]: time="2026-03-07T01:17:22.196405218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:17:22.196527 containerd[1467]: time="2026-03-07T01:17:22.196430658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:22.203237 containerd[1467]: time="2026-03-07T01:17:22.198554365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:22.205619 containerd[1467]: time="2026-03-07T01:17:22.205578195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cd7b6945c-6xktn,Uid:db500040-3011-4390-aa58-2e19f8e5b3b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f\""
Mar 7 01:17:22.270956 systemd[1]: Started cri-containerd-ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594.scope - libcontainer container ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594.
Mar 7 01:17:22.336011 containerd[1467]: time="2026-03-07T01:17:22.335946940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m5smf,Uid:a5cfa2ca-87cd-4d73-a3e2-864a12def4e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570\""
Mar 7 01:17:22.342498 kubelet[2544]: E0307 01:17:22.341468 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:22.356659 containerd[1467]: time="2026-03-07T01:17:22.356611886Z" level=info msg="StartContainer for \"720bfdc44b7094c4d972fc86458c1cae55e2be3e86a02b078ed751f555a26e88\" returns successfully"
Mar 7 01:17:22.361151 containerd[1467]: time="2026-03-07T01:17:22.360828157Z" level=info msg="CreateContainer within sandbox \"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:17:22.407704 containerd[1467]: time="2026-03-07T01:17:22.407668669Z" level=info msg="CreateContainer within sandbox \"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0a90a46f10d13cb366d9ca84f5a3dbaa4873bc5a2fa09c7d30174a9dec396c4\""
Mar 7 01:17:22.409879 containerd[1467]: time="2026-03-07T01:17:22.409695244Z" level=info msg="StartContainer for \"a0a90a46f10d13cb366d9ca84f5a3dbaa4873bc5a2fa09c7d30174a9dec396c4\""
Mar 7 01:17:22.413235 containerd[1467]: time="2026-03-07T01:17:22.413188708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-mnqxd,Uid:ef7273c6-8ac7-408c-aad0-60960cd76fb7,Namespace:calico-system,Attempt:1,} returns sandbox id \"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882\""
Mar 7 01:17:22.473701 systemd[1]: Started cri-containerd-a0a90a46f10d13cb366d9ca84f5a3dbaa4873bc5a2fa09c7d30174a9dec396c4.scope - libcontainer container a0a90a46f10d13cb366d9ca84f5a3dbaa4873bc5a2fa09c7d30174a9dec396c4.
Mar 7 01:17:22.480414 systemd-networkd[1381]: cali8e4edca80f8: Gained IPv6LL
Mar 7 01:17:22.578073 containerd[1467]: time="2026-03-07T01:17:22.577760171Z" level=info msg="StartContainer for \"a0a90a46f10d13cb366d9ca84f5a3dbaa4873bc5a2fa09c7d30174a9dec396c4\" returns successfully"
Mar 7 01:17:22.672150 containerd[1467]: time="2026-03-07T01:17:22.671589078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c88b4c946-vh79t,Uid:dbd64a92-9c62-476d-9ca1-dc1c5857a76f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594\""
Mar 7 01:17:22.783272 containerd[1467]: time="2026-03-07T01:17:22.781887221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:22.783919 containerd[1467]: time="2026-03-07T01:17:22.783888816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 7 01:17:22.784691 containerd[1467]: time="2026-03-07T01:17:22.784666773Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:22.787615 containerd[1467]: time="2026-03-07T01:17:22.787594499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:22.789041 containerd[1467]: time="2026-03-07T01:17:22.789019615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.222362323s"
Mar 7 01:17:22.789364 containerd[1467]: time="2026-03-07T01:17:22.789344698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 7 01:17:22.792130 containerd[1467]: time="2026-03-07T01:17:22.792113198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 7 01:17:22.795108 containerd[1467]: time="2026-03-07T01:17:22.795062296Z" level=info msg="CreateContainer within sandbox \"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 7 01:17:22.799149 systemd-networkd[1381]: cali5dcafb17ad6: Gained IPv6LL
Mar 7 01:17:22.807841 containerd[1467]: time="2026-03-07T01:17:22.807736672Z" level=info msg="CreateContainer within sandbox \"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2f106d13b3711231e2147fa3732a805e11c9c2bd463e3ac7df1578f1bdd9b278\""
Mar 7 01:17:22.809238 containerd[1467]: time="2026-03-07T01:17:22.808527655Z" level=info msg="StartContainer for \"2f106d13b3711231e2147fa3732a805e11c9c2bd463e3ac7df1578f1bdd9b278\""
Mar 7 01:17:22.845378 systemd[1]: Started cri-containerd-2f106d13b3711231e2147fa3732a805e11c9c2bd463e3ac7df1578f1bdd9b278.scope - libcontainer container 2f106d13b3711231e2147fa3732a805e11c9c2bd463e3ac7df1578f1bdd9b278.
Mar 7 01:17:22.862348 systemd-networkd[1381]: cali16f77c12f55: Gained IPv6LL
Mar 7 01:17:22.863155 systemd-networkd[1381]: calib41b8cf5537: Gained IPv6LL
Mar 7 01:17:22.884239 containerd[1467]: time="2026-03-07T01:17:22.884185194Z" level=info msg="StartContainer for \"2f106d13b3711231e2147fa3732a805e11c9c2bd463e3ac7df1578f1bdd9b278\" returns successfully"
Mar 7 01:17:23.245432 systemd-networkd[1381]: cali2aab1de8b65: Gained IPv6LL
Mar 7 01:17:23.373455 systemd-networkd[1381]: cali784d0210a5c: Gained IPv6LL
Mar 7 01:17:23.405089 kubelet[2544]: E0307 01:17:23.404681 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:23.411851 kubelet[2544]: E0307 01:17:23.408423 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:23.473309 kubelet[2544]: I0307 01:17:23.473258 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m5smf" podStartSLOduration=28.473241 podStartE2EDuration="28.473241s" podCreationTimestamp="2026-03-07 01:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:23.472869497 +0000 UTC m=+34.582075946" watchObservedRunningTime="2026-03-07 01:17:23.473241 +0000 UTC m=+34.582447449"
Mar 7 01:17:23.477280 kubelet[2544]: I0307 01:17:23.476897 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w6vhn" podStartSLOduration=28.476886154 podStartE2EDuration="28.476886154s" podCreationTimestamp="2026-03-07 01:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:23.426239716 +0000 UTC m=+34.535446155" watchObservedRunningTime="2026-03-07 01:17:23.476886154 +0000 UTC m=+34.586092594"
Mar 7 01:17:23.566974 systemd-networkd[1381]: cali395e185aa5c: Gained IPv6LL
Mar 7 01:17:24.419573 kubelet[2544]: E0307 01:17:24.419130 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:24.421111 kubelet[2544]: E0307 01:17:24.421074 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:24.452334 containerd[1467]: time="2026-03-07T01:17:24.452266572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 7 01:17:24.454033 containerd[1467]: time="2026-03-07T01:17:24.453502333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:24.456244 containerd[1467]: time="2026-03-07T01:17:24.456192566Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:24.457377 containerd[1467]: time="2026-03-07T01:17:24.457342647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.665138745s"
Mar 7 01:17:24.457480 containerd[1467]: time="2026-03-07T01:17:24.457457671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:17:24.458133 containerd[1467]: time="2026-03-07T01:17:24.458100314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:24.459433 containerd[1467]: time="2026-03-07T01:17:24.459386621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 7 01:17:24.462463 containerd[1467]: time="2026-03-07T01:17:24.462433773Z" level=info msg="CreateContainer within sandbox \"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:17:24.486440 containerd[1467]: time="2026-03-07T01:17:24.486409948Z" level=info msg="CreateContainer within sandbox \"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d05e9ce4cee83753b7ac7a03f69cc8e204f0081adf850973c872c340116b2f3\""
Mar 7 01:17:24.487459 containerd[1467]: time="2026-03-07T01:17:24.487429402Z" level=info msg="StartContainer for \"5d05e9ce4cee83753b7ac7a03f69cc8e204f0081adf850973c872c340116b2f3\""
Mar 7 01:17:24.547441 systemd[1]: Started cri-containerd-5d05e9ce4cee83753b7ac7a03f69cc8e204f0081adf850973c872c340116b2f3.scope - libcontainer container 5d05e9ce4cee83753b7ac7a03f69cc8e204f0081adf850973c872c340116b2f3.
Mar 7 01:17:24.612307 containerd[1467]: time="2026-03-07T01:17:24.612194419Z" level=info msg="StartContainer for \"5d05e9ce4cee83753b7ac7a03f69cc8e204f0081adf850973c872c340116b2f3\" returns successfully"
Mar 7 01:17:25.430601 kubelet[2544]: E0307 01:17:25.427409 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:25.434552 kubelet[2544]: E0307 01:17:25.433985 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:17:25.463269 kubelet[2544]: I0307 01:17:25.463018 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-cd7b6945c-nn72l" podStartSLOduration=15.723805977 podStartE2EDuration="18.463000898s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:21.719559683 +0000 UTC m=+32.828766132" lastFinishedPulling="2026-03-07 01:17:24.458754614 +0000 UTC m=+35.567961053" observedRunningTime="2026-03-07 01:17:25.462471547 +0000 UTC m=+36.571678006" watchObservedRunningTime="2026-03-07 01:17:25.463000898 +0000 UTC m=+36.572207347"
Mar 7 01:17:26.432001 kubelet[2544]: I0307 01:17:26.431839 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:26.857230 containerd[1467]: time="2026-03-07T01:17:26.855900912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:26.857230 containerd[1467]: time="2026-03-07T01:17:26.857071509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 7 01:17:26.857781 containerd[1467]: time="2026-03-07T01:17:26.857754971Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:26.860365 containerd[1467]: time="2026-03-07T01:17:26.860320476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:26.861570 containerd[1467]: time="2026-03-07T01:17:26.861534303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.402107606s"
Mar 7 01:17:26.861632 containerd[1467]: time="2026-03-07T01:17:26.861573298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 7 01:17:26.863391 containerd[1467]: time="2026-03-07T01:17:26.863351520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 7 01:17:26.888560 containerd[1467]: time="2026-03-07T01:17:26.888519584Z" level=info msg="CreateContainer within sandbox \"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 7 01:17:26.902568 containerd[1467]: time="2026-03-07T01:17:26.902529709Z" level=info msg="CreateContainer within sandbox \"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e\""
Mar 7 01:17:26.904852 containerd[1467]: time="2026-03-07T01:17:26.904271529Z" level=info msg="StartContainer for \"a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e\""
Mar 7 01:17:26.969401 systemd[1]: Started cri-containerd-a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e.scope - libcontainer container a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e.
Mar 7 01:17:27.024147 containerd[1467]: time="2026-03-07T01:17:27.024044411Z" level=info msg="StartContainer for \"a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e\" returns successfully"
Mar 7 01:17:27.040461 containerd[1467]: time="2026-03-07T01:17:27.040023596Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:27.042319 containerd[1467]: time="2026-03-07T01:17:27.042273000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 7 01:17:27.046083 containerd[1467]: time="2026-03-07T01:17:27.046014045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 182.624522ms"
Mar 7 01:17:27.046166 containerd[1467]: time="2026-03-07T01:17:27.046086028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 7 01:17:27.048450 containerd[1467]: time="2026-03-07T01:17:27.047438782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 7 01:17:27.054159 containerd[1467]: time="2026-03-07T01:17:27.054007497Z" level=info msg="CreateContainer within sandbox \"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 7 01:17:27.070921 containerd[1467]: time="2026-03-07T01:17:27.070875415Z" level=info msg="CreateContainer within sandbox \"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d24d6bf935a211e1e30cc048868ea5b1230c7e121fda78af696ca5e929507fd8\""
Mar 7 01:17:27.073825 containerd[1467]: time="2026-03-07T01:17:27.072137755Z" level=info msg="StartContainer for \"d24d6bf935a211e1e30cc048868ea5b1230c7e121fda78af696ca5e929507fd8\""
Mar 7 01:17:27.153676 systemd[1]: Started cri-containerd-d24d6bf935a211e1e30cc048868ea5b1230c7e121fda78af696ca5e929507fd8.scope - libcontainer container d24d6bf935a211e1e30cc048868ea5b1230c7e121fda78af696ca5e929507fd8.
Mar 7 01:17:27.238690 containerd[1467]: time="2026-03-07T01:17:27.238646968Z" level=info msg="StartContainer for \"d24d6bf935a211e1e30cc048868ea5b1230c7e121fda78af696ca5e929507fd8\" returns successfully"
Mar 7 01:17:27.481057 kubelet[2544]: I0307 01:17:27.480449 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-cd7b6945c-6xktn" podStartSLOduration=15.642259445 podStartE2EDuration="20.480432028s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:22.208805001 +0000 UTC m=+33.318011440" lastFinishedPulling="2026-03-07 01:17:27.046977584 +0000 UTC m=+38.156184023" observedRunningTime="2026-03-07 01:17:27.46247968 +0000 UTC m=+38.571686129" watchObservedRunningTime="2026-03-07 01:17:27.480432028 +0000 UTC m=+38.589638467"
Mar 7 01:17:28.454862 kubelet[2544]: I0307 01:17:28.453622 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:28.455986 kubelet[2544]: I0307 01:17:28.455289 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:29.291070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189955097.mount: Deactivated successfully.
Mar 7 01:17:29.478469 kubelet[2544]: I0307 01:17:29.476691 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:17:29.755015 kubelet[2544]: I0307 01:17:29.754541 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f7d8f8f9f-h65cp" podStartSLOduration=16.770053229 podStartE2EDuration="21.754524124s" podCreationTimestamp="2026-03-07 01:17:08 +0000 UTC" firstStartedPulling="2026-03-07 01:17:21.878753368 +0000 UTC m=+32.987959807" lastFinishedPulling="2026-03-07 01:17:26.863224263 +0000 UTC m=+37.972430702" observedRunningTime="2026-03-07 01:17:27.481376527 +0000 UTC m=+38.590582986" watchObservedRunningTime="2026-03-07 01:17:29.754524124 +0000 UTC m=+40.863730573"
Mar 7 01:17:30.033589 containerd[1467]: time="2026-03-07T01:17:30.031537020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.035511 containerd[1467]: time="2026-03-07T01:17:30.035413557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 7 01:17:30.037234 containerd[1467]: time="2026-03-07T01:17:30.036364773Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.042599 containerd[1467]: time="2026-03-07T01:17:30.042567803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.047454 containerd[1467]: time="2026-03-07T01:17:30.047420694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.999932636s"
Mar 7 01:17:30.049230 containerd[1467]: time="2026-03-07T01:17:30.047545579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 7 01:17:30.053424 containerd[1467]: time="2026-03-07T01:17:30.053406576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 7 01:17:30.057118 containerd[1467]: time="2026-03-07T01:17:30.057089832Z" level=info msg="CreateContainer within sandbox \"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 7 01:17:30.071748 containerd[1467]: time="2026-03-07T01:17:30.071715930Z" level=info msg="CreateContainer within sandbox \"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4\""
Mar 7 01:17:30.074018 containerd[1467]: time="2026-03-07T01:17:30.072455663Z" level=info msg="StartContainer for \"a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4\""
Mar 7 01:17:30.158448 systemd[1]: Started cri-containerd-a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4.scope - libcontainer container a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4.
Mar 7 01:17:30.243269 containerd[1467]: time="2026-03-07T01:17:30.242921898Z" level=info msg="StartContainer for \"a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4\" returns successfully"
Mar 7 01:17:30.493938 kubelet[2544]: I0307 01:17:30.493707 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-mnqxd" podStartSLOduration=15.862259452 podStartE2EDuration="23.493687695s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:22.418926867 +0000 UTC m=+33.528133306" lastFinishedPulling="2026-03-07 01:17:30.0503551 +0000 UTC m=+41.159561549" observedRunningTime="2026-03-07 01:17:30.488575327 +0000 UTC m=+41.597781816" watchObservedRunningTime="2026-03-07 01:17:30.493687695 +0000 UTC m=+41.602894134"
Mar 7 01:17:30.520501 systemd[1]: run-containerd-runc-k8s.io-a2bb7b0eddc89617fe6160b2e50d8be0ac2a9793b859e6cf0abfb2a0a95b4ac4-runc.C9vPoq.mount: Deactivated successfully.
Mar 7 01:17:30.769150 containerd[1467]: time="2026-03-07T01:17:30.768083155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.769327 containerd[1467]: time="2026-03-07T01:17:30.769295589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 7 01:17:30.769800 containerd[1467]: time="2026-03-07T01:17:30.769772443Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.772141 containerd[1467]: time="2026-03-07T01:17:30.772141155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:30.773426 containerd[1467]:
time="2026-03-07T01:17:30.773332395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 719.84041ms" Mar 7 01:17:30.773426 containerd[1467]: time="2026-03-07T01:17:30.773383699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:17:30.775419 containerd[1467]: time="2026-03-07T01:17:30.775398080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:17:30.780773 containerd[1467]: time="2026-03-07T01:17:30.780736261Z" level=info msg="CreateContainer within sandbox \"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:17:30.796110 containerd[1467]: time="2026-03-07T01:17:30.795895052Z" level=info msg="CreateContainer within sandbox \"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"28bca90c6b8fa53c52691d648d12cf9f269285084fa6b5f72bfa5594eaeb928d\"" Mar 7 01:17:30.799141 containerd[1467]: time="2026-03-07T01:17:30.799102854Z" level=info msg="StartContainer for \"28bca90c6b8fa53c52691d648d12cf9f269285084fa6b5f72bfa5594eaeb928d\"" Mar 7 01:17:30.859872 systemd[1]: Started cri-containerd-28bca90c6b8fa53c52691d648d12cf9f269285084fa6b5f72bfa5594eaeb928d.scope - libcontainer container 28bca90c6b8fa53c52691d648d12cf9f269285084fa6b5f72bfa5594eaeb928d. 
Mar 7 01:17:30.946358 containerd[1467]: time="2026-03-07T01:17:30.946320756Z" level=info msg="StartContainer for \"28bca90c6b8fa53c52691d648d12cf9f269285084fa6b5f72bfa5594eaeb928d\" returns successfully" Mar 7 01:17:31.575234 containerd[1467]: time="2026-03-07T01:17:31.575101129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:31.576570 containerd[1467]: time="2026-03-07T01:17:31.576235724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:17:31.578421 containerd[1467]: time="2026-03-07T01:17:31.577048808Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:31.580291 containerd[1467]: time="2026-03-07T01:17:31.579047538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:31.580291 containerd[1467]: time="2026-03-07T01:17:31.580070797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 804.067366ms" Mar 7 01:17:31.580291 containerd[1467]: time="2026-03-07T01:17:31.580124579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:17:31.582549 containerd[1467]: 
time="2026-03-07T01:17:31.582495000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:17:31.587505 containerd[1467]: time="2026-03-07T01:17:31.587459336Z" level=info msg="CreateContainer within sandbox \"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:17:31.602441 containerd[1467]: time="2026-03-07T01:17:31.602396848Z" level=info msg="CreateContainer within sandbox \"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b06d13c766217729ef47701fc4682dbec9b163c7208b7ae1e91c7046d17ba165\"" Mar 7 01:17:31.603431 containerd[1467]: time="2026-03-07T01:17:31.603392571Z" level=info msg="StartContainer for \"b06d13c766217729ef47701fc4682dbec9b163c7208b7ae1e91c7046d17ba165\"" Mar 7 01:17:31.666335 systemd[1]: Started cri-containerd-b06d13c766217729ef47701fc4682dbec9b163c7208b7ae1e91c7046d17ba165.scope - libcontainer container b06d13c766217729ef47701fc4682dbec9b163c7208b7ae1e91c7046d17ba165. 
Mar 7 01:17:31.715836 containerd[1467]: time="2026-03-07T01:17:31.715716008Z" level=info msg="StartContainer for \"b06d13c766217729ef47701fc4682dbec9b163c7208b7ae1e91c7046d17ba165\" returns successfully" Mar 7 01:17:32.169812 kubelet[2544]: I0307 01:17:32.169669 2544 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:17:32.172663 kubelet[2544]: I0307 01:17:32.171380 2544 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:17:32.510108 kubelet[2544]: I0307 01:17:32.508636 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hjkhq" podStartSLOduration=14.488833136 podStartE2EDuration="24.508619653s" podCreationTimestamp="2026-03-07 01:17:08 +0000 UTC" firstStartedPulling="2026-03-07 01:17:21.561839026 +0000 UTC m=+32.671045475" lastFinishedPulling="2026-03-07 01:17:31.581625553 +0000 UTC m=+42.690831992" observedRunningTime="2026-03-07 01:17:32.508005102 +0000 UTC m=+43.617211561" watchObservedRunningTime="2026-03-07 01:17:32.508619653 +0000 UTC m=+43.617826092" Mar 7 01:17:32.538059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257032916.mount: Deactivated successfully. 
Mar 7 01:17:32.548078 containerd[1467]: time="2026-03-07T01:17:32.548043531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:32.549055 containerd[1467]: time="2026-03-07T01:17:32.549013227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:17:32.551250 containerd[1467]: time="2026-03-07T01:17:32.549883320Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:32.552075 containerd[1467]: time="2026-03-07T01:17:32.552038343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:32.552965 containerd[1467]: time="2026-03-07T01:17:32.552933650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 970.40312ms" Mar 7 01:17:32.553019 containerd[1467]: time="2026-03-07T01:17:32.552966437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:17:32.557586 containerd[1467]: time="2026-03-07T01:17:32.557367780Z" level=info msg="CreateContainer within sandbox \"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:17:32.575151 
containerd[1467]: time="2026-03-07T01:17:32.575107432Z" level=info msg="CreateContainer within sandbox \"ace4e870b41d6e4019cddae7e78cfaa14a325904ddb90af2aa79f5d7fc9b1594\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c544857f3caeb41e66c14054c35fcc7f59a4d7729083ba5609a342f43e73267f\"" Mar 7 01:17:32.576175 containerd[1467]: time="2026-03-07T01:17:32.576110844Z" level=info msg="StartContainer for \"c544857f3caeb41e66c14054c35fcc7f59a4d7729083ba5609a342f43e73267f\"" Mar 7 01:17:32.622434 systemd[1]: Started cri-containerd-c544857f3caeb41e66c14054c35fcc7f59a4d7729083ba5609a342f43e73267f.scope - libcontainer container c544857f3caeb41e66c14054c35fcc7f59a4d7729083ba5609a342f43e73267f. Mar 7 01:17:32.683748 containerd[1467]: time="2026-03-07T01:17:32.683704624Z" level=info msg="StartContainer for \"c544857f3caeb41e66c14054c35fcc7f59a4d7729083ba5609a342f43e73267f\" returns successfully" Mar 7 01:17:33.495949 kubelet[2544]: I0307 01:17:33.495896 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c88b4c946-vh79t" podStartSLOduration=3.617024335 podStartE2EDuration="13.495883683s" podCreationTimestamp="2026-03-07 01:17:20 +0000 UTC" firstStartedPulling="2026-03-07 01:17:22.675257708 +0000 UTC m=+33.784464147" lastFinishedPulling="2026-03-07 01:17:32.554117036 +0000 UTC m=+43.663323495" observedRunningTime="2026-03-07 01:17:33.49445105 +0000 UTC m=+44.603657499" watchObservedRunningTime="2026-03-07 01:17:33.495883683 +0000 UTC m=+44.605090132" Mar 7 01:17:37.361822 kubelet[2544]: I0307 01:17:37.361542 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:37.362487 kubelet[2544]: E0307 01:17:37.361921 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:37.496926 kubelet[2544]: E0307 01:17:37.496583 2544 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Mar 7 01:17:38.385238 kernel: calico-node[5362]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:17:39.076726 systemd-networkd[1381]: vxlan.calico: Link UP Mar 7 01:17:39.076741 systemd-networkd[1381]: vxlan.calico: Gained carrier Mar 7 01:17:40.973744 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Mar 7 01:17:44.261079 kubelet[2544]: I0307 01:17:44.261013 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:49.017808 containerd[1467]: time="2026-03-07T01:17:49.017771245Z" level=info msg="StopPodSandbox for \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\"" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.069 [WARNING][5537] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-csi--node--driver--hjkhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"109d9e3b-df59-4367-b34e-f9e69ac61279", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b", Pod:"csi-node-driver-hjkhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16f77c12f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.070 [INFO][5537] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.070 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" iface="eth0" netns="" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.070 [INFO][5537] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.070 [INFO][5537] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.097 [INFO][5546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.098 [INFO][5546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.098 [INFO][5546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.103 [WARNING][5546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.103 [INFO][5546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.105 [INFO][5546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.111056 containerd[1467]: 2026-03-07 01:17:49.107 [INFO][5537] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.111503 containerd[1467]: time="2026-03-07T01:17:49.111104927Z" level=info msg="TearDown network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" successfully" Mar 7 01:17:49.111503 containerd[1467]: time="2026-03-07T01:17:49.111136424Z" level=info msg="StopPodSandbox for \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" returns successfully" Mar 7 01:17:49.112019 containerd[1467]: time="2026-03-07T01:17:49.111994507Z" level=info msg="RemovePodSandbox for \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\"" Mar 7 01:17:49.112019 containerd[1467]: time="2026-03-07T01:17:49.112023715Z" level=info msg="Forcibly stopping sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\"" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.155 [WARNING][5560] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-csi--node--driver--hjkhq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"109d9e3b-df59-4367-b34e-f9e69ac61279", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"a4884c3741b539153f31eeca947afd5b7676ae19a95e4ac039c64507b069cf9b", Pod:"csi-node-driver-hjkhq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16f77c12f55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.156 [INFO][5560] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.156 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" iface="eth0" netns="" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.156 [INFO][5560] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.156 [INFO][5560] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.183 [INFO][5567] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.183 [INFO][5567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.183 [INFO][5567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.191 [WARNING][5567] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.191 [INFO][5567] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" HandleID="k8s-pod-network.36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Workload="172--236--123--47-k8s-csi--node--driver--hjkhq-eth0" Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.193 [INFO][5567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.197639 containerd[1467]: 2026-03-07 01:17:49.195 [INFO][5560] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb" Mar 7 01:17:49.198268 containerd[1467]: time="2026-03-07T01:17:49.197683188Z" level=info msg="TearDown network for sandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" successfully" Mar 7 01:17:49.202581 containerd[1467]: time="2026-03-07T01:17:49.202544997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:49.202671 containerd[1467]: time="2026-03-07T01:17:49.202607253Z" level=info msg="RemovePodSandbox \"36d9f0f1f19a222f40c7af0121b49a48d9caba77845934e04ce35e3306b53adb\" returns successfully" Mar 7 01:17:49.203136 containerd[1467]: time="2026-03-07T01:17:49.203116719Z" level=info msg="StopPodSandbox for \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\"" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.239 [WARNING][5581] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" WorkloadEndpoint="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.240 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.240 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" iface="eth0" netns="" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.240 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.240 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.261 [INFO][5588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.262 [INFO][5588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.262 [INFO][5588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.267 [WARNING][5588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.268 [INFO][5588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.269 [INFO][5588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.274663 containerd[1467]: 2026-03-07 01:17:49.272 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.274663 containerd[1467]: time="2026-03-07T01:17:49.274478737Z" level=info msg="TearDown network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" successfully" Mar 7 01:17:49.274663 containerd[1467]: time="2026-03-07T01:17:49.274504013Z" level=info msg="StopPodSandbox for \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" returns successfully" Mar 7 01:17:49.275973 containerd[1467]: time="2026-03-07T01:17:49.274946853Z" level=info msg="RemovePodSandbox for \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\"" Mar 7 01:17:49.275973 containerd[1467]: time="2026-03-07T01:17:49.274975140Z" level=info msg="Forcibly stopping sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\"" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.311 [WARNING][5603] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" WorkloadEndpoint="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.311 [INFO][5603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.311 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" iface="eth0" netns="" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.311 [INFO][5603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.311 [INFO][5603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.332 [INFO][5610] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.333 [INFO][5610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.333 [INFO][5610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.339 [WARNING][5610] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.339 [INFO][5610] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" HandleID="k8s-pod-network.2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Workload="172--236--123--47-k8s-whisker--7b478cf965--dvnqr-eth0" Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.341 [INFO][5610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.347233 containerd[1467]: 2026-03-07 01:17:49.343 [INFO][5603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d" Mar 7 01:17:49.347233 containerd[1467]: time="2026-03-07T01:17:49.346120153Z" level=info msg="TearDown network for sandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" successfully" Mar 7 01:17:49.351823 containerd[1467]: time="2026-03-07T01:17:49.351774029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:49.351901 containerd[1467]: time="2026-03-07T01:17:49.351834584Z" level=info msg="RemovePodSandbox \"2546781dcc91362468584482a7d35183b4841138d235d37786aa77f703e5765d\" returns successfully" Mar 7 01:17:49.352294 containerd[1467]: time="2026-03-07T01:17:49.352260110Z" level=info msg="StopPodSandbox for \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\"" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.402 [WARNING][5624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ef7273c6-8ac7-408c-aad0-60960cd76fb7", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882", Pod:"goldmane-cccfbd5cf-mnqxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali784d0210a5c", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.403 [INFO][5624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.403 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" iface="eth0" netns="" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.403 [INFO][5624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.403 [INFO][5624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.428 [INFO][5632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.428 [INFO][5632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.428 [INFO][5632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.434 [WARNING][5632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.434 [INFO][5632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.435 [INFO][5632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.440182 containerd[1467]: 2026-03-07 01:17:49.437 [INFO][5624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.440835 containerd[1467]: time="2026-03-07T01:17:49.440252893Z" level=info msg="TearDown network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" successfully" Mar 7 01:17:49.440835 containerd[1467]: time="2026-03-07T01:17:49.440467866Z" level=info msg="StopPodSandbox for \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" returns successfully" Mar 7 01:17:49.441332 containerd[1467]: time="2026-03-07T01:17:49.441283530Z" level=info msg="RemovePodSandbox for \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\"" Mar 7 01:17:49.441332 containerd[1467]: time="2026-03-07T01:17:49.441315207Z" level=info msg="Forcibly stopping sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\"" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.491 [WARNING][5648] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ef7273c6-8ac7-408c-aad0-60960cd76fb7", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"f81029c3e62b7f13d554385da4282e42128bbf696adf824da0ac017f57dff882", Pod:"goldmane-cccfbd5cf-mnqxd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali784d0210a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.491 [INFO][5648] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.491 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" iface="eth0" netns="" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.491 [INFO][5648] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.491 [INFO][5648] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.518 [INFO][5662] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.518 [INFO][5662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.518 [INFO][5662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.527 [WARNING][5662] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.527 [INFO][5662] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" HandleID="k8s-pod-network.b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Workload="172--236--123--47-k8s-goldmane--cccfbd5cf--mnqxd-eth0" Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.530 [INFO][5662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.542574 containerd[1467]: 2026-03-07 01:17:49.535 [INFO][5648] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6" Mar 7 01:17:49.542574 containerd[1467]: time="2026-03-07T01:17:49.541800867Z" level=info msg="TearDown network for sandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" successfully" Mar 7 01:17:49.548506 containerd[1467]: time="2026-03-07T01:17:49.548404919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:49.548506 containerd[1467]: time="2026-03-07T01:17:49.548471196Z" level=info msg="RemovePodSandbox \"b5c1ebd9db46fa7f28af2776a3e7341c7f8ae39ea80caa4370f2723197149dc6\" returns successfully" Mar 7 01:17:49.548871 containerd[1467]: time="2026-03-07T01:17:49.548845120Z" level=info msg="StopPodSandbox for \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\"" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.602 [WARNING][5682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0", GenerateName:"calico-kube-controllers-7f7d8f8f9f-", Namespace:"calico-system", SelfLink:"", UID:"4df59194-badb-495f-a4b0-832c5bd3bb89", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d8f8f9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2", Pod:"calico-kube-controllers-7f7d8f8f9f-h65cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e4edca80f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.602 [INFO][5682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.602 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" iface="eth0" netns="" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.602 [INFO][5682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.602 [INFO][5682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.623 [INFO][5692] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.623 [INFO][5692] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.624 [INFO][5692] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.631 [WARNING][5692] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.631 [INFO][5692] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.633 [INFO][5692] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.637753 containerd[1467]: 2026-03-07 01:17:49.635 [INFO][5682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.638401 containerd[1467]: time="2026-03-07T01:17:49.637801692Z" level=info msg="TearDown network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" successfully" Mar 7 01:17:49.638401 containerd[1467]: time="2026-03-07T01:17:49.637829469Z" level=info msg="StopPodSandbox for \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" returns successfully" Mar 7 01:17:49.638991 containerd[1467]: time="2026-03-07T01:17:49.638945017Z" level=info msg="RemovePodSandbox for \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\"" Mar 7 01:17:49.639029 containerd[1467]: time="2026-03-07T01:17:49.638999780Z" level=info msg="Forcibly stopping sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\"" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.675 [WARNING][5707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0", GenerateName:"calico-kube-controllers-7f7d8f8f9f-", Namespace:"calico-system", SelfLink:"", UID:"4df59194-badb-495f-a4b0-832c5bd3bb89", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d8f8f9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"8dfe4a0c96d6c2b45abd875fa5f7bbfe90b149026a60eb2c68836334343773d2", Pod:"calico-kube-controllers-7f7d8f8f9f-h65cp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e4edca80f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.676 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.676 [INFO][5707] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" iface="eth0" netns="" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.676 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.676 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.700 [INFO][5715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.700 [INFO][5715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.700 [INFO][5715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.707 [WARNING][5715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.707 [INFO][5715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" HandleID="k8s-pod-network.da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Workload="172--236--123--47-k8s-calico--kube--controllers--7f7d8f8f9f--h65cp-eth0" Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.709 [INFO][5715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.713562 containerd[1467]: 2026-03-07 01:17:49.711 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917" Mar 7 01:17:49.714052 containerd[1467]: time="2026-03-07T01:17:49.714020007Z" level=info msg="TearDown network for sandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" successfully" Mar 7 01:17:49.719050 containerd[1467]: time="2026-03-07T01:17:49.719010298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:49.719105 containerd[1467]: time="2026-03-07T01:17:49.719086067Z" level=info msg="RemovePodSandbox \"da89b0dce45c5f812f096cda69aeaafb5426e950da775bbda9926402fcf13917\" returns successfully" Mar 7 01:17:49.719547 containerd[1467]: time="2026-03-07T01:17:49.719513343Z" level=info msg="StopPodSandbox for \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\"" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.755 [WARNING][5729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"db500040-3011-4390-aa58-2e19f8e5b3b6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f", Pod:"calico-apiserver-cd7b6945c-6xktn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib41b8cf5537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.755 [INFO][5729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.755 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" iface="eth0" netns="" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.755 [INFO][5729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.755 [INFO][5729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.790 [INFO][5737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.790 [INFO][5737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.790 [INFO][5737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.798 [WARNING][5737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.798 [INFO][5737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.800 [INFO][5737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.806299 containerd[1467]: 2026-03-07 01:17:49.803 [INFO][5729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.806299 containerd[1467]: time="2026-03-07T01:17:49.805996511Z" level=info msg="TearDown network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" successfully" Mar 7 01:17:49.806299 containerd[1467]: time="2026-03-07T01:17:49.806026758Z" level=info msg="StopPodSandbox for \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" returns successfully" Mar 7 01:17:49.808768 containerd[1467]: time="2026-03-07T01:17:49.807910977Z" level=info msg="RemovePodSandbox for \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\"" Mar 7 01:17:49.808768 containerd[1467]: time="2026-03-07T01:17:49.807944795Z" level=info msg="Forcibly stopping sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\"" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.847 [WARNING][5752] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"db500040-3011-4390-aa58-2e19f8e5b3b6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"40f8ba3569ff0c23d6a42a0f23558a5958050192ce6875f700950da0ae4dd54f", Pod:"calico-apiserver-cd7b6945c-6xktn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib41b8cf5537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.848 [INFO][5752] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.848 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" iface="eth0" netns="" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.848 [INFO][5752] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.848 [INFO][5752] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.870 [INFO][5760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.870 [INFO][5760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.870 [INFO][5760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.877 [WARNING][5760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.877 [INFO][5760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" HandleID="k8s-pod-network.c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--6xktn-eth0" Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.879 [INFO][5760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.884099 containerd[1467]: 2026-03-07 01:17:49.881 [INFO][5752] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261" Mar 7 01:17:49.884768 containerd[1467]: time="2026-03-07T01:17:49.884139624Z" level=info msg="TearDown network for sandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" successfully" Mar 7 01:17:49.888540 containerd[1467]: time="2026-03-07T01:17:49.888509261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:49.888823 containerd[1467]: time="2026-03-07T01:17:49.888776247Z" level=info msg="RemovePodSandbox \"c48c4d765a77bc967c655e909e133d88313b9e94a83755c1f37a14d217826261\" returns successfully" Mar 7 01:17:49.889361 containerd[1467]: time="2026-03-07T01:17:49.889305289Z" level=info msg="StopPodSandbox for \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\"" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.930 [WARNING][5774] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d", Pod:"calico-apiserver-cd7b6945c-nn72l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59e19621222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.931 [INFO][5774] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.931 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" iface="eth0" netns="" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.931 [INFO][5774] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.931 [INFO][5774] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.959 [INFO][5781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.959 [INFO][5781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.959 [INFO][5781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.967 [WARNING][5781] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.967 [INFO][5781] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.969 [INFO][5781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:49.978840 containerd[1467]: 2026-03-07 01:17:49.975 [INFO][5774] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:49.978840 containerd[1467]: time="2026-03-07T01:17:49.978692018Z" level=info msg="TearDown network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" successfully" Mar 7 01:17:49.978840 containerd[1467]: time="2026-03-07T01:17:49.978723776Z" level=info msg="StopPodSandbox for \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" returns successfully" Mar 7 01:17:49.979882 containerd[1467]: time="2026-03-07T01:17:49.979851927Z" level=info msg="RemovePodSandbox for \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\"" Mar 7 01:17:49.979957 containerd[1467]: time="2026-03-07T01:17:49.979891087Z" level=info msg="Forcibly stopping sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\"" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.022 [WARNING][5795] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0", GenerateName:"calico-apiserver-cd7b6945c-", Namespace:"calico-system", SelfLink:"", UID:"7cb646e4-d34c-4c2b-9a6e-cd8ff6644850", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cd7b6945c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b4c4c02f3a5cda2c6ce7651abf5beb73b74787e6ba512a8159898690aee7344d", Pod:"calico-apiserver-cd7b6945c-nn72l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali59e19621222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.022 [INFO][5795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.022 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" iface="eth0" netns="" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.022 [INFO][5795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.022 [INFO][5795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.048 [INFO][5802] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.048 [INFO][5802] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.048 [INFO][5802] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.055 [WARNING][5802] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.055 [INFO][5802] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" HandleID="k8s-pod-network.cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Workload="172--236--123--47-k8s-calico--apiserver--cd7b6945c--nn72l-eth0" Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.058 [INFO][5802] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:50.064341 containerd[1467]: 2026-03-07 01:17:50.061 [INFO][5795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383" Mar 7 01:17:50.064341 containerd[1467]: time="2026-03-07T01:17:50.064085552Z" level=info msg="TearDown network for sandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" successfully" Mar 7 01:17:50.070013 containerd[1467]: time="2026-03-07T01:17:50.069978630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:50.070080 containerd[1467]: time="2026-03-07T01:17:50.070045506Z" level=info msg="RemovePodSandbox \"cce4a2e4ed95083bb1338cddf56bfada89dcb7c6ea2edc5b5c93e7f8f1353383\" returns successfully" Mar 7 01:17:50.070639 containerd[1467]: time="2026-03-07T01:17:50.070612470Z" level=info msg="StopPodSandbox for \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\"" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.111 [WARNING][5816] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20277e9-b30a-4317-a29f-090f637eb98b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe", Pod:"coredns-66bc5c9577-w6vhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dcafb17ad6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.111 [INFO][5816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.111 [INFO][5816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" iface="eth0" netns="" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.111 [INFO][5816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.111 [INFO][5816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.139 [INFO][5823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.139 [INFO][5823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.139 [INFO][5823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.145 [WARNING][5823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.145 [INFO][5823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.146 [INFO][5823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:50.152601 containerd[1467]: 2026-03-07 01:17:50.149 [INFO][5816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.153387 containerd[1467]: time="2026-03-07T01:17:50.152832844Z" level=info msg="TearDown network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" successfully" Mar 7 01:17:50.153387 containerd[1467]: time="2026-03-07T01:17:50.152856370Z" level=info msg="StopPodSandbox for \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" returns successfully" Mar 7 01:17:50.153732 containerd[1467]: time="2026-03-07T01:17:50.153697640Z" level=info msg="RemovePodSandbox for \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\"" Mar 7 01:17:50.153732 containerd[1467]: time="2026-03-07T01:17:50.153726827Z" level=info msg="Forcibly stopping sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\"" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.194 [WARNING][5837] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20277e9-b30a-4317-a29f-090f637eb98b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"b454e2ce0dd44c2d0cbc5049b334354237d27ad503c32160e8ffdfe536cc0afe", Pod:"coredns-66bc5c9577-w6vhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dcafb17ad6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.194 [INFO][5837] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.194 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" iface="eth0" netns="" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.194 [INFO][5837] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.194 [INFO][5837] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.229 [INFO][5844] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.230 [INFO][5844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.230 [INFO][5844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.237 [WARNING][5844] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.237 [INFO][5844] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" HandleID="k8s-pod-network.371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Workload="172--236--123--47-k8s-coredns--66bc5c9577--w6vhn-eth0" Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.239 [INFO][5844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:50.245035 containerd[1467]: 2026-03-07 01:17:50.242 [INFO][5837] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b" Mar 7 01:17:50.245501 containerd[1467]: time="2026-03-07T01:17:50.245064192Z" level=info msg="TearDown network for sandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" successfully" Mar 7 01:17:50.249424 containerd[1467]: time="2026-03-07T01:17:50.249336656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:50.249424 containerd[1467]: time="2026-03-07T01:17:50.249416986Z" level=info msg="RemovePodSandbox \"371b2bf9b6af4c065bdff86fa399e21c08d721d71e2afab0a2f1f62c7018d54b\" returns successfully" Mar 7 01:17:50.250191 containerd[1467]: time="2026-03-07T01:17:50.250161732Z" level=info msg="StopPodSandbox for \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\"" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.297 [WARNING][5858] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570", Pod:"coredns-66bc5c9577-m5smf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aab1de8b65", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.297 [INFO][5858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.297 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" iface="eth0" netns="" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.297 [INFO][5858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.297 [INFO][5858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.335 [INFO][5865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.335 [INFO][5865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.335 [INFO][5865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.341 [WARNING][5865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.342 [INFO][5865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.343 [INFO][5865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:50.351348 containerd[1467]: 2026-03-07 01:17:50.346 [INFO][5858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.351348 containerd[1467]: time="2026-03-07T01:17:50.350818739Z" level=info msg="TearDown network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" successfully" Mar 7 01:17:50.351348 containerd[1467]: time="2026-03-07T01:17:50.350851647Z" level=info msg="StopPodSandbox for \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" returns successfully" Mar 7 01:17:50.357820 containerd[1467]: time="2026-03-07T01:17:50.355045372Z" level=info msg="RemovePodSandbox for \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\"" Mar 7 01:17:50.357820 containerd[1467]: time="2026-03-07T01:17:50.355086761Z" level=info msg="Forcibly stopping sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\"" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.409 [WARNING][5897] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5cfa2ca-87cd-4d73-a3e2-864a12def4e1", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-123-47", ContainerID:"c837ff0997a7ec7d0a72d4efd8ae831ac64b8497993b2c9f5495bc9a37e9c570", Pod:"coredns-66bc5c9577-m5smf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2aab1de8b65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.410 [INFO][5897] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.410 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" iface="eth0" netns="" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.410 [INFO][5897] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.410 [INFO][5897] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.441 [INFO][5904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.441 [INFO][5904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.441 [INFO][5904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.446 [WARNING][5904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.446 [INFO][5904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" HandleID="k8s-pod-network.5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Workload="172--236--123--47-k8s-coredns--66bc5c9577--m5smf-eth0" Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.448 [INFO][5904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:50.453513 containerd[1467]: 2026-03-07 01:17:50.450 [INFO][5897] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c" Mar 7 01:17:50.454004 containerd[1467]: time="2026-03-07T01:17:50.453618705Z" level=info msg="TearDown network for sandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" successfully" Mar 7 01:17:50.464057 containerd[1467]: time="2026-03-07T01:17:50.463310844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:50.464057 containerd[1467]: time="2026-03-07T01:17:50.463377899Z" level=info msg="RemovePodSandbox \"5dfd77a1befc7164c8d642805fbcde3f6bd8078c196465cdbf2046bd7bf6fb4c\" returns successfully"
Mar 7 01:17:58.147400 kubelet[2544]: I0307 01:17:58.146744 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:18:04.031090 kubelet[2544]: E0307 01:18:04.031016 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:09.032023 kubelet[2544]: E0307 01:18:09.031117 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:14.357974 systemd[1]: run-containerd-runc-k8s.io-a35b0c40ea0c6a774dda4050b851fd047c603131fa82ebc0c4f0b22a8b2dea0e-runc.YMnqnm.mount: Deactivated successfully.
Mar 7 01:18:27.031148 kubelet[2544]: E0307 01:18:27.030379 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:28.029682 kubelet[2544]: E0307 01:18:28.029576 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:31.487830 systemd[1]: Started sshd@7-172.236.123.47:22-68.220.241.50:43810.service - OpenSSH per-connection server daemon (68.220.241.50:43810).
Mar 7 01:18:31.688720 sshd[6061]: Accepted publickey for core from 68.220.241.50 port 43810 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:31.692886 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:31.702070 systemd-logind[1444]: New session 8 of user core.
Mar 7 01:18:31.706460 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:18:31.918889 sshd[6061]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:31.923621 systemd[1]: sshd@7-172.236.123.47:22-68.220.241.50:43810.service: Deactivated successfully.
Mar 7 01:18:31.926309 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:18:31.927713 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:18:31.929190 systemd-logind[1444]: Removed session 8.
Mar 7 01:18:33.031530 kubelet[2544]: E0307 01:18:33.030345 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:36.948273 systemd[1]: Started sshd@8-172.236.123.47:22-68.220.241.50:37924.service - OpenSSH per-connection server daemon (68.220.241.50:37924).
Mar 7 01:18:37.102233 sshd[6095]: Accepted publickey for core from 68.220.241.50 port 37924 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:37.103299 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:37.108276 systemd-logind[1444]: New session 9 of user core.
Mar 7 01:18:37.115355 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:18:37.306442 sshd[6095]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:37.312222 systemd[1]: sshd@8-172.236.123.47:22-68.220.241.50:37924.service: Deactivated successfully.
Mar 7 01:18:37.318092 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:18:37.320950 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:18:37.322061 systemd-logind[1444]: Removed session 9.
Mar 7 01:18:42.349772 systemd[1]: Started sshd@9-172.236.123.47:22-68.220.241.50:51826.service - OpenSSH per-connection server daemon (68.220.241.50:51826).
Mar 7 01:18:42.496167 sshd[6109]: Accepted publickey for core from 68.220.241.50 port 51826 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:42.498754 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:42.506352 systemd-logind[1444]: New session 10 of user core.
Mar 7 01:18:42.520345 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:18:42.709792 sshd[6109]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:42.713704 systemd[1]: sshd@9-172.236.123.47:22-68.220.241.50:51826.service: Deactivated successfully.
Mar 7 01:18:42.715937 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:18:42.717789 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:18:42.720142 systemd-logind[1444]: Removed session 10.
Mar 7 01:18:47.757053 systemd[1]: Started sshd@10-172.236.123.47:22-68.220.241.50:51830.service - OpenSSH per-connection server daemon (68.220.241.50:51830).
Mar 7 01:18:47.939267 sshd[6158]: Accepted publickey for core from 68.220.241.50 port 51830 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:47.942754 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:47.949286 systemd-logind[1444]: New session 11 of user core.
Mar 7 01:18:47.953387 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:18:48.148842 sshd[6158]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:48.154923 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:18:48.156297 systemd[1]: sshd@10-172.236.123.47:22-68.220.241.50:51830.service: Deactivated successfully.
Mar 7 01:18:48.158532 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:18:48.159884 systemd-logind[1444]: Removed session 11.
Mar 7 01:18:48.183380 systemd[1]: Started sshd@11-172.236.123.47:22-68.220.241.50:51834.service - OpenSSH per-connection server daemon (68.220.241.50:51834).
Mar 7 01:18:48.339244 sshd[6172]: Accepted publickey for core from 68.220.241.50 port 51834 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:48.341100 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:48.348269 systemd-logind[1444]: New session 12 of user core.
Mar 7 01:18:48.355363 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:18:48.568191 sshd[6172]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:48.573636 systemd[1]: sshd@11-172.236.123.47:22-68.220.241.50:51834.service: Deactivated successfully.
Mar 7 01:18:48.576462 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:18:48.578382 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:18:48.579488 systemd-logind[1444]: Removed session 12.
Mar 7 01:18:48.601090 systemd[1]: Started sshd@12-172.236.123.47:22-68.220.241.50:51846.service - OpenSSH per-connection server daemon (68.220.241.50:51846).
Mar 7 01:18:48.752379 sshd[6183]: Accepted publickey for core from 68.220.241.50 port 51846 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:48.754994 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:48.763621 systemd-logind[1444]: New session 13 of user core.
Mar 7 01:18:48.765380 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:18:48.953828 sshd[6183]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:48.960878 systemd[1]: sshd@12-172.236.123.47:22-68.220.241.50:51846.service: Deactivated successfully.
Mar 7 01:18:48.963979 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:18:48.965018 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:18:48.966747 systemd-logind[1444]: Removed session 13.
Mar 7 01:18:50.333340 systemd[1]: run-containerd-runc-k8s.io-c3bd3d8d12f86da845c78ee2fdd4e93e77f347a1a6813af725dd70f89d797b25-runc.n2Z5R6.mount: Deactivated successfully.
Mar 7 01:18:53.033443 kubelet[2544]: E0307 01:18:53.033379 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:18:53.989448 systemd[1]: Started sshd@13-172.236.123.47:22-68.220.241.50:42972.service - OpenSSH per-connection server daemon (68.220.241.50:42972).
Mar 7 01:18:54.136248 sshd[6222]: Accepted publickey for core from 68.220.241.50 port 42972 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:54.138422 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:54.144845 systemd-logind[1444]: New session 14 of user core.
Mar 7 01:18:54.151454 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:18:54.337365 sshd[6222]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:54.342188 systemd[1]: sshd@13-172.236.123.47:22-68.220.241.50:42972.service: Deactivated successfully.
Mar 7 01:18:54.345761 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:18:54.346946 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:18:54.349527 systemd-logind[1444]: Removed session 14.
Mar 7 01:18:54.375549 systemd[1]: Started sshd@14-172.236.123.47:22-68.220.241.50:42984.service - OpenSSH per-connection server daemon (68.220.241.50:42984).
Mar 7 01:18:54.528268 sshd[6235]: Accepted publickey for core from 68.220.241.50 port 42984 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:54.529254 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:54.536292 systemd-logind[1444]: New session 15 of user core.
Mar 7 01:18:54.541337 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:18:54.920819 sshd[6235]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:54.924934 systemd[1]: sshd@14-172.236.123.47:22-68.220.241.50:42984.service: Deactivated successfully.
Mar 7 01:18:54.928028 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:18:54.929968 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:18:54.931670 systemd-logind[1444]: Removed session 15.
Mar 7 01:18:54.955520 systemd[1]: Started sshd@15-172.236.123.47:22-68.220.241.50:42998.service - OpenSSH per-connection server daemon (68.220.241.50:42998).
Mar 7 01:18:55.177156 sshd[6246]: Accepted publickey for core from 68.220.241.50 port 42998 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:55.179461 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:55.185230 systemd-logind[1444]: New session 16 of user core.
Mar 7 01:18:55.188350 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:18:55.923378 sshd[6246]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:55.931127 systemd[1]: sshd@15-172.236.123.47:22-68.220.241.50:42998.service: Deactivated successfully.
Mar 7 01:18:55.931587 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:18:55.938041 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:18:55.956478 systemd-logind[1444]: Removed session 16.
Mar 7 01:18:55.968317 systemd[1]: Started sshd@16-172.236.123.47:22-68.220.241.50:43014.service - OpenSSH per-connection server daemon (68.220.241.50:43014).
Mar 7 01:18:56.120765 sshd[6263]: Accepted publickey for core from 68.220.241.50 port 43014 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:56.123286 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:56.128945 systemd-logind[1444]: New session 17 of user core.
Mar 7 01:18:56.134593 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:18:56.466513 sshd[6263]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:56.471495 systemd[1]: sshd@16-172.236.123.47:22-68.220.241.50:43014.service: Deactivated successfully.
Mar 7 01:18:56.474679 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:18:56.478044 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:18:56.480402 systemd-logind[1444]: Removed session 17.
Mar 7 01:18:56.506796 systemd[1]: Started sshd@17-172.236.123.47:22-68.220.241.50:43030.service - OpenSSH per-connection server daemon (68.220.241.50:43030).
Mar 7 01:18:56.666445 sshd[6283]: Accepted publickey for core from 68.220.241.50 port 43030 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:18:56.668181 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:56.675793 systemd-logind[1444]: New session 18 of user core.
Mar 7 01:18:56.683383 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:18:56.875269 sshd[6283]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:56.882917 systemd[1]: sshd@17-172.236.123.47:22-68.220.241.50:43030.service: Deactivated successfully.
Mar 7 01:18:56.890731 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:18:56.891485 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:18:56.892511 systemd-logind[1444]: Removed session 18.
Mar 7 01:19:00.030326 kubelet[2544]: E0307 01:19:00.030285 2544 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Mar 7 01:19:01.916981 systemd[1]: Started sshd@18-172.236.123.47:22-68.220.241.50:43034.service - OpenSSH per-connection server daemon (68.220.241.50:43034).
Mar 7 01:19:02.069158 sshd[6337]: Accepted publickey for core from 68.220.241.50 port 43034 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:19:02.070295 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:19:02.077531 systemd-logind[1444]: New session 19 of user core.
Mar 7 01:19:02.085444 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:19:02.272134 sshd[6337]: pam_unix(sshd:session): session closed for user core
Mar 7 01:19:02.279686 systemd[1]: sshd@18-172.236.123.47:22-68.220.241.50:43034.service: Deactivated successfully.
Mar 7 01:19:02.282236 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:19:02.283925 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:19:02.285658 systemd-logind[1444]: Removed session 19.
Mar 7 01:19:07.316221 systemd[1]: Started sshd@19-172.236.123.47:22-68.220.241.50:55242.service - OpenSSH per-connection server daemon (68.220.241.50:55242).
Mar 7 01:19:07.464236 sshd[6350]: Accepted publickey for core from 68.220.241.50 port 55242 ssh2: RSA SHA256:VD7nd6trGWzdZ9116KtsnZ/dmgOuCTn+rZ8eI6NU1x8
Mar 7 01:19:07.465944 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:19:07.471491 systemd-logind[1444]: New session 20 of user core.
Mar 7 01:19:07.476346 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:19:07.674617 sshd[6350]: pam_unix(sshd:session): session closed for user core
Mar 7 01:19:07.680487 systemd[1]: sshd@19-172.236.123.47:22-68.220.241.50:55242.service: Deactivated successfully.
Mar 7 01:19:07.683450 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:19:07.685257 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:19:07.687067 systemd-logind[1444]: Removed session 20.