Aug 13 07:17:40.916037 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:17:40.916062 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:40.916073 kernel: BIOS-provided physical RAM map:
Aug 13 07:17:40.916079 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:17:40.916086 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:17:40.916092 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:17:40.916099 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:17:40.916106 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:17:40.916112 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:17:40.916118 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:17:40.916128 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:17:40.916134 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:17:40.916143 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:17:40.916150 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:17:40.916161 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:17:40.916168 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:17:40.916178 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:17:40.916184 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:17:40.916191 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:17:40.916198 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:17:40.916205 kernel: NX (Execute Disable) protection: active
Aug 13 07:17:40.916211 kernel: APIC: Static calls initialized
Aug 13 07:17:40.916218 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:17:40.916225 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:17:40.916233 kernel: SMBIOS 2.8 present.
Aug 13 07:17:40.916241 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:17:40.916259 kernel: Hypervisor detected: KVM
Aug 13 07:17:40.916272 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:17:40.916281 kernel: kvm-clock: using sched offset of 5038195898 cycles
Aug 13 07:17:40.916290 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:17:40.916300 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:17:40.916309 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:17:40.916317 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:17:40.916324 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:17:40.916332 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:17:40.916339 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:17:40.916349 kernel: Using GB pages for direct mapping
Aug 13 07:17:40.916355 kernel: Secure boot disabled
Aug 13 07:17:40.916362 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:17:40.916370 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:17:40.916381 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:17:40.916389 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916396 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916406 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:17:40.916414 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916424 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916431 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916439 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:17:40.916446 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:17:40.916454 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:17:40.916464 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:17:40.916471 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:17:40.916479 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:17:40.916486 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:17:40.916493 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:17:40.916500 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:17:40.916508 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:17:40.916515 kernel: No NUMA configuration found
Aug 13 07:17:40.916525 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:17:40.916535 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:17:40.916542 kernel: Zone ranges:
Aug 13 07:17:40.916550 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:17:40.916558 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:17:40.916565 kernel: Normal empty
Aug 13 07:17:40.916572 kernel: Movable zone start for each node
Aug 13 07:17:40.916580 kernel: Early memory node ranges
Aug 13 07:17:40.916587 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:17:40.916595 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:17:40.916602 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:17:40.916612 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:17:40.916619 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:17:40.916626 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:17:40.916636 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:17:40.916644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:40.916651 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:17:40.916658 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:17:40.916666 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:17:40.916673 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:17:40.916683 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:17:40.916691 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:17:40.916698 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:17:40.916706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:17:40.916713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:17:40.916720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:17:40.916728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:17:40.916735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:17:40.916742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:17:40.916752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:17:40.916759 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:17:40.916767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:17:40.916774 kernel: TSC deadline timer available
Aug 13 07:17:40.916781 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:17:40.916789 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:17:40.916796 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:17:40.916803 kernel: kvm-guest: setup PV sched yield
Aug 13 07:17:40.916836 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:17:40.916847 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:17:40.916855 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:17:40.916863 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:17:40.916870 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:17:40.916877 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:17:40.916885 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:17:40.916892 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:17:40.916899 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:17:40.916908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:40.916921 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:17:40.916928 kernel: random: crng init done
Aug 13 07:17:40.916935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:17:40.916943 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:17:40.916950 kernel: Fallback order for Node 0: 0
Aug 13 07:17:40.916958 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:17:40.916965 kernel: Policy zone: DMA32
Aug 13 07:17:40.916972 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:17:40.916982 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:17:40.916990 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:17:40.916997 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:17:40.917004 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:17:40.917012 kernel: Dynamic Preempt: voluntary
Aug 13 07:17:40.917028 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:17:40.917038 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:17:40.917046 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:17:40.917054 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:17:40.917062 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:17:40.917070 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:17:40.917077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:17:40.917088 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:17:40.917095 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:17:40.917106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:17:40.917114 kernel: Console: colour dummy device 80x25
Aug 13 07:17:40.917121 kernel: printk: console [ttyS0] enabled
Aug 13 07:17:40.917131 kernel: ACPI: Core revision 20230628
Aug 13 07:17:40.917140 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:17:40.917148 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:17:40.917155 kernel: x2apic enabled
Aug 13 07:17:40.917163 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:17:40.917171 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:17:40.917178 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:17:40.917186 kernel: kvm-guest: setup PV IPIs
Aug 13 07:17:40.917194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:17:40.917204 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:17:40.917211 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:17:40.917219 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:17:40.917227 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:17:40.917234 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:17:40.917242 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:17:40.917260 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:17:40.917270 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:17:40.917280 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:17:40.917293 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:17:40.917304 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:17:40.917314 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:17:40.917324 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:17:40.917338 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:17:40.917348 kernel: x86/bugs: return thunk changed
Aug 13 07:17:40.917358 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:17:40.917367 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:17:40.917378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:17:40.917386 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:17:40.917393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:17:40.917401 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:17:40.917409 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:17:40.917416 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:17:40.917424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:17:40.917432 kernel: landlock: Up and running.
Aug 13 07:17:40.917440 kernel: SELinux: Initializing.
Aug 13 07:17:40.917450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:17:40.917457 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:17:40.917465 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:17:40.917473 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:17:40.917481 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:17:40.917489 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:17:40.917497 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:17:40.917504 kernel: ... version: 0
Aug 13 07:17:40.917512 kernel: ... bit width: 48
Aug 13 07:17:40.917522 kernel: ... generic registers: 6
Aug 13 07:17:40.917529 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:17:40.917537 kernel: ... max period: 00007fffffffffff
Aug 13 07:17:40.917544 kernel: ... fixed-purpose events: 0
Aug 13 07:17:40.917552 kernel: ... event mask: 000000000000003f
Aug 13 07:17:40.917560 kernel: signal: max sigframe size: 1776
Aug 13 07:17:40.917567 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:17:40.917575 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:17:40.917583 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:17:40.917593 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:17:40.917600 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:17:40.917608 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:17:40.917616 kernel: smpboot: Max logical packages: 1
Aug 13 07:17:40.917623 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:17:40.917631 kernel: devtmpfs: initialized
Aug 13 07:17:40.917639 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:17:40.917646 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:17:40.917654 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:17:40.917664 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:17:40.917672 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:17:40.917679 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:17:40.917687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:17:40.917695 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:17:40.917702 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:17:40.917710 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:17:40.917718 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:17:40.917726 kernel: audit: type=2000 audit(1755069459.291:1): state=initialized audit_enabled=0 res=1
Aug 13 07:17:40.917736 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:17:40.917743 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:17:40.917751 kernel: cpuidle: using governor menu
Aug 13 07:17:40.917758 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:17:40.917766 kernel: dca service started, version 1.12.1
Aug 13 07:17:40.917774 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:17:40.917781 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:17:40.917789 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:17:40.917796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:17:40.917806 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:17:40.917827 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:17:40.917835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:17:40.917842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:17:40.917850 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:17:40.917857 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:17:40.917865 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:17:40.917872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:17:40.917880 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:17:40.917891 kernel: ACPI: Interpreter enabled
Aug 13 07:17:40.917898 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:17:40.917906 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:17:40.917914 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:17:40.917921 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:17:40.917929 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:17:40.917936 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:17:40.918164 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:17:40.918325 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:17:40.918457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:17:40.918468 kernel: PCI host bridge to bus 0000:00
Aug 13 07:17:40.918611 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:17:40.918730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:17:40.918865 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:40.918983 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:17:40.919104 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:17:40.919219 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:17:40.919360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:17:40.919522 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:17:40.919671 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:17:40.919968 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:17:40.921545 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:17:40.921870 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:17:40.922044 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:17:40.922227 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:17:40.922717 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:17:40.923066 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:17:40.924783 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:17:40.924962 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:17:40.925199 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:17:40.925412 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:17:40.925664 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:17:40.925871 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:17:40.926282 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:17:40.926574 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:17:40.926949 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:17:40.927359 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:17:40.927835 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:17:40.928404 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:17:40.929127 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:17:40.929582 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:17:40.930052 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:17:40.930280 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:17:40.930504 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:17:40.930878 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:17:40.930907 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:17:40.930916 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:17:40.930924 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:17:40.930932 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:17:40.930965 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:17:40.931150 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:17:40.931167 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:17:40.931191 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:17:40.931212 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:17:40.931233 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:17:40.931270 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:17:40.931303 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:17:40.931330 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:17:40.931343 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:17:40.931351 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:17:40.931359 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:17:40.931367 kernel: iommu: Default domain type: Translated
Aug 13 07:17:40.931398 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:17:40.931426 kernel: efivars: Registered efivars operations
Aug 13 07:17:40.931450 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:17:40.931476 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:17:40.931500 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:17:40.931525 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:17:40.931556 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:17:40.931579 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:17:40.932276 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:17:40.933764 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:17:40.934287 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:17:40.934330 kernel: vgaarb: loaded
Aug 13 07:17:40.934360 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:17:40.934389 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:17:40.934431 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:17:40.934456 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:17:40.934475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:17:40.934494 kernel: pnp: PnP ACPI init
Aug 13 07:17:40.935121 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:17:40.935146 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:17:40.935156 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:17:40.935179 kernel: NET: Registered PF_INET protocol family
Aug 13 07:17:40.935198 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:17:40.935217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:17:40.935226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:17:40.935236 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:17:40.935264 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:17:40.935297 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:17:40.935322 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:17:40.935348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:17:40.935360 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:17:40.935386 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:17:40.936082 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:17:40.936273 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:17:40.936501 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:17:40.936709 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:17:40.937139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:17:40.937606 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:17:40.938055 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:17:40.938520 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:17:40.938554 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:17:40.938581 kernel: Initialise system trusted keyrings
Aug 13 07:17:40.938603 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:17:40.938627 kernel: Key type asymmetric registered
Aug 13 07:17:40.938651 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:17:40.938678 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:17:40.938704 kernel: io scheduler mq-deadline registered
Aug 13 07:17:40.938728 kernel: io scheduler kyber registered
Aug 13 07:17:40.938766 kernel: io scheduler bfq registered
Aug 13 07:17:40.938790 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:17:40.938946 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:17:40.938976 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:17:40.939003 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:17:40.939027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:17:40.939053 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:17:40.939080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:17:40.939109 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:17:40.939148 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:17:40.939893 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:17:40.940436 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:17:40.940473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:17:40.940930 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:17:40 UTC (1755069460)
Aug 13 07:17:40.941332 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:17:40.941363 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:17:40.941377 kernel: efifb: probing for efifb
Aug 13 07:17:40.941407 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:17:40.941425 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:17:40.941444 kernel: efifb: scrolling: redraw
Aug 13 07:17:40.941463 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:17:40.941479 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:17:40.941496 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:17:40.941549 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:17:40.941570 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:17:40.941584 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:17:40.941605 kernel: Segment Routing with IPv6
Aug 13 07:17:40.941625 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:17:40.941646 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:17:40.941660 kernel: Key type dns_resolver registered
Aug 13 07:17:40.941679 kernel: IPI shorthand broadcast: enabled
Aug 13 07:17:40.941700 kernel: sched_clock: Marking stable (984002998, 112256243)->(1115300009, -19040768)
Aug 13 07:17:40.941716 kernel: registered taskstats version 1
Aug 13 07:17:40.941737 kernel: Loading compiled-in X.509 certificates
Aug 13 07:17:40.941754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:17:40.941777 kernel: Key type .fscrypt registered
Aug 13 07:17:40.941797 kernel: Key type fscrypt-provisioning registered
Aug 13 07:17:40.941837 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:17:40.941860 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:17:40.941884 kernel: ima: No architecture policies found
Aug 13 07:17:40.941908 kernel: clk: Disabling unused clocks
Aug 13 07:17:40.941933 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:17:40.941957 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:17:40.941984 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:17:40.942003 kernel: Run /init as init process
Aug 13 07:17:40.942025 kernel: with arguments:
Aug 13 07:17:40.942035 kernel: /init
Aug 13 07:17:40.942051 kernel: with environment:
Aug 13 07:17:40.942073 kernel: HOME=/
Aug 13 07:17:40.942094 kernel: TERM=linux
Aug 13 07:17:40.942119 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:17:40.942145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:17:40.942184 systemd[1]: Detected virtualization kvm.
Aug 13 07:17:40.942207 systemd[1]: Detected architecture x86-64.
Aug 13 07:17:40.942228 systemd[1]: Running in initrd.
Aug 13 07:17:40.942265 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:17:40.942307 systemd[1]: Hostname set to .
Aug 13 07:17:40.942354 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:17:40.942374 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:17:40.942384 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:17:40.942397 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:17:40.942420 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:17:40.942442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:17:40.942465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:17:40.942497 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:17:40.942524 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:17:40.942544 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:17:40.942564 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:17:40.942587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:17:40.942604 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:17:40.942613 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:17:40.942625 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:17:40.942634 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:17:40.942642 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:17:40.942651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:17:40.942659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:17:40.942668 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:17:40.942693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:17:40.942720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:17:40.942743 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:17:40.942771 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:17:40.942792 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:17:40.942878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:17:40.942907 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:17:40.942930 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:17:40.942955 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:17:40.942976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:17:40.942991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:40.943020 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:17:40.943040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:17:40.943059 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:17:40.943079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:17:40.943104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:17:40.943185 systemd-journald[193]: Collecting audit messages is disabled.
Aug 13 07:17:40.943234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:17:40.943272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:40.943309 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:17:40.943336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:17:40.943356 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:17:40.943372 kernel: Bridge firewalling registered
Aug 13 07:17:40.943389 systemd-journald[193]: Journal started
Aug 13 07:17:40.943432 systemd-journald[193]: Runtime Journal (/run/log/journal/1b62513d73ab41fd9a5523d8f52b11cb) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:17:40.902432 systemd-modules-load[194]: Inserted module 'overlay'
Aug 13 07:17:40.949513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:17:40.949548 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:17:40.940698 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 13 07:17:40.958235 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:17:40.960715 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:17:40.966193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:40.969763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:17:40.974608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:17:40.977265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:17:40.980911 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:17:40.989111 dracut-cmdline[224]: dracut-dracut-053
Aug 13 07:17:40.992586 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:17:41.018348 systemd-resolved[232]: Positive Trust Anchors:
Aug 13 07:17:41.018366 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:17:41.018396 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:17:41.021391 systemd-resolved[232]: Defaulting to hostname 'linux'.
Aug 13 07:17:41.023183 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:17:41.027936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:17:41.099863 kernel: SCSI subsystem initialized
Aug 13 07:17:41.109846 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:17:41.119841 kernel: iscsi: registered transport (tcp)
Aug 13 07:17:41.143865 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:17:41.143938 kernel: QLogic iSCSI HBA Driver
Aug 13 07:17:41.202014 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:17:41.220997 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:17:41.245845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:17:41.245878 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:17:41.247413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:17:41.289857 kernel: raid6: avx2x4 gen() 29484 MB/s
Aug 13 07:17:41.306858 kernel: raid6: avx2x2 gen() 30039 MB/s
Aug 13 07:17:41.323935 kernel: raid6: avx2x1 gen() 25135 MB/s
Aug 13 07:17:41.324007 kernel: raid6: using algorithm avx2x2 gen() 30039 MB/s
Aug 13 07:17:41.341978 kernel: raid6: .... xor() 19236 MB/s, rmw enabled
Aug 13 07:17:41.342095 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:17:41.362863 kernel: xor: automatically using best checksumming function avx
Aug 13 07:17:41.520877 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:17:41.535795 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:17:41.543003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:17:41.557281 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Aug 13 07:17:41.562267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:17:41.570003 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:17:41.584313 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Aug 13 07:17:41.617910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:17:41.626041 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:17:41.694919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:17:41.703971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:17:41.716865 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:17:41.720672 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:17:41.723075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:17:41.725417 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:17:41.730835 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:17:41.733283 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:17:41.734869 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:17:41.736043 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:17:41.742590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:17:41.742615 kernel: GPT:9289727 != 19775487
Aug 13 07:17:41.742625 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:17:41.742635 kernel: GPT:9289727 != 19775487
Aug 13 07:17:41.742645 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:17:41.742655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:17:41.748852 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:17:41.752202 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:17:41.752596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:17:41.764845 kernel: libata version 3.00 loaded.
Aug 13 07:17:41.766755 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:17:41.766902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:41.774041 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:17:41.774256 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:17:41.775839 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:17:41.774312 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:17:41.782558 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:17:41.775612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:17:41.775777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:41.779560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:41.790104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:41.796053 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (466)
Aug 13 07:17:41.799830 kernel: scsi host0: ahci
Aug 13 07:17:41.800035 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Aug 13 07:17:41.804384 kernel: scsi host1: ahci
Aug 13 07:17:41.807851 kernel: scsi host2: ahci
Aug 13 07:17:41.808048 kernel: scsi host3: ahci
Aug 13 07:17:41.808211 kernel: scsi host4: ahci
Aug 13 07:17:41.810139 kernel: scsi host5: ahci
Aug 13 07:17:41.810341 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:17:41.810354 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:17:41.812489 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:17:41.812511 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:17:41.812522 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:17:41.814073 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:17:41.817272 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:17:41.822949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:17:41.833091 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:17:41.833174 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:17:41.841943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:17:41.855983 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:17:41.856058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:17:41.856112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:41.859298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:41.860959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:17:41.872959 disk-uuid[566]: Primary Header is updated.
Aug 13 07:17:41.872959 disk-uuid[566]: Secondary Entries is updated.
Aug 13 07:17:41.872959 disk-uuid[566]: Secondary Header is updated.
Aug 13 07:17:41.876933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:17:41.880308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:17:41.883153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:17:41.889204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:17:41.912340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:17:42.121024 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:17:42.121120 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:17:42.121132 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:17:42.121836 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:17:42.122851 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:17:42.123848 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:17:42.124862 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:17:42.124889 kernel: ata3.00: applying bridge limits
Aug 13 07:17:42.125842 kernel: ata3.00: configured for UDMA/100
Aug 13 07:17:42.127844 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:17:42.169848 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:17:42.170114 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:17:42.188836 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:17:42.882845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:17:42.883179 disk-uuid[568]: The operation has completed successfully.
Aug 13 07:17:42.913714 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:17:42.913892 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:17:42.937039 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:17:42.940528 sh[594]: Success
Aug 13 07:17:42.953844 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:17:42.986532 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:17:42.996406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:17:42.999415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:17:43.012702 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:17:43.012743 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:17:43.012754 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:17:43.012773 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:17:43.013416 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:17:43.017894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:17:43.020381 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:17:43.039953 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:17:43.042659 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:17:43.051439 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:17:43.051467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:17:43.051484 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:17:43.054846 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:17:43.064360 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:17:43.066118 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:17:43.075309 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:17:43.081976 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:17:43.142564 ignition[686]: Ignition 2.19.0
Aug 13 07:17:43.142579 ignition[686]: Stage: fetch-offline
Aug 13 07:17:43.142617 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:43.142628 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:17:43.142736 ignition[686]: parsed url from cmdline: ""
Aug 13 07:17:43.142741 ignition[686]: no config URL provided
Aug 13 07:17:43.142746 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:17:43.142757 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:17:43.142786 ignition[686]: op(1): [started] loading QEMU firmware config module
Aug 13 07:17:43.142805 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:17:43.152669 ignition[686]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:17:43.172740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:17:43.178972 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:17:43.194732 ignition[686]: parsing config with SHA512: ce044e1538bf092f9ef822b4b3d15ee67df3b9238c6bfa8d39eb949dc1b23a654bd29f4b9609a6f98b6637011f6f340b192b61a5d95c739ccde97a58c849a3d9
Aug 13 07:17:43.200014 unknown[686]: fetched base config from "system"
Aug 13 07:17:43.200152 unknown[686]: fetched user config from "qemu"
Aug 13 07:17:43.201142 ignition[686]: fetch-offline: fetch-offline passed
Aug 13 07:17:43.201225 ignition[686]: Ignition finished successfully
Aug 13 07:17:43.204087 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:17:43.213685 systemd-networkd[782]: lo: Link UP
Aug 13 07:17:43.213698 systemd-networkd[782]: lo: Gained carrier
Aug 13 07:17:43.215852 systemd-networkd[782]: Enumeration completed
Aug 13 07:17:43.215952 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:17:43.216360 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:17:43.216365 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:17:43.217391 systemd-networkd[782]: eth0: Link UP
Aug 13 07:17:43.217395 systemd-networkd[782]: eth0: Gained carrier
Aug 13 07:17:43.217404 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:17:43.218355 systemd[1]: Reached target network.target - Network.
Aug 13 07:17:43.219326 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:17:43.230874 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:17:43.243047 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:17:43.258211 ignition[786]: Ignition 2.19.0
Aug 13 07:17:43.258221 ignition[786]: Stage: kargs
Aug 13 07:17:43.258428 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:43.258441 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:17:43.259392 ignition[786]: kargs: kargs passed
Aug 13 07:17:43.259438 ignition[786]: Ignition finished successfully
Aug 13 07:17:43.263378 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:17:43.280996 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:17:43.352631 ignition[793]: Ignition 2.19.0
Aug 13 07:17:43.352642 ignition[793]: Stage: disks
Aug 13 07:17:43.352840 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:17:43.352852 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:17:43.353604 ignition[793]: disks: disks passed
Aug 13 07:17:43.353653 ignition[793]: Ignition finished successfully
Aug 13 07:17:43.359290 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:17:43.361376 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:17:43.361459 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:17:43.364571 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:17:43.366556 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:17:43.366756 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:17:43.379949 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:17:43.395056 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:17:43.401716 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:17:43.421003 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:17:43.572845 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:17:43.573328 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:17:43.575655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:17:43.589954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:17:43.592599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:17:43.595040 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:17:43.595115 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:17:43.595160 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:17:43.601842 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Aug 13 07:17:43.604020 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:17:43.604046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:17:43.604065 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:17:43.605966 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:17:43.608632 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:17:43.609796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:17:43.628005 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:17:43.663960 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:17:43.669627 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:17:43.675048 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:17:43.680339 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:17:43.775976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:17:43.785098 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:17:43.788532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:17:43.793868 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:43.827404 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:17:43.829372 ignition[923]: INFO : Ignition 2.19.0 Aug 13 07:17:43.829372 ignition[923]: INFO : Stage: mount Aug 13 07:17:43.829372 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:43.829372 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:17:43.833235 ignition[923]: INFO : mount: mount passed Aug 13 07:17:43.833235 ignition[923]: INFO : Ignition finished successfully Aug 13 07:17:43.836684 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:17:43.848927 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:17:44.011332 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:17:44.029118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:17:44.037835 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936) Aug 13 07:17:44.039856 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:17:44.039879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:17:44.039890 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:17:44.042848 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:17:44.045357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:17:44.073931 ignition[953]: INFO : Ignition 2.19.0 Aug 13 07:17:44.073931 ignition[953]: INFO : Stage: files Aug 13 07:17:44.075947 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:44.075947 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:17:44.075947 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:17:44.075947 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:17:44.075947 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:17:44.082389 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:17:44.082389 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:17:44.082389 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:17:44.082389 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:17:44.082389 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 07:17:44.079210 unknown[953]: wrote ssh authorized keys file for user: core Aug 13 07:17:44.196962 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:17:44.350497 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 07:17:44.352627 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:17:44.354408 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:17:44.356086 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:17:44.357712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:17:44.357712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:17:44.357712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:17:44.357712 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:17:44.364524 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 07:17:44.767580 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 07:17:44.882193 systemd-networkd[782]: eth0: Gained IPv6LL Aug 13 07:17:45.326522 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 07:17:45.326522 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 07:17:45.330193 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 07:17:45.365612 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:17:45.416647 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:17:45.418380 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:17:45.418380 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:17:45.418380 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:17:45.418380 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:17:45.418380 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:17:45.418380 ignition[953]: INFO : files: files passed Aug 13 07:17:45.418380 ignition[953]: INFO : Ignition finished successfully Aug 13 07:17:45.429485 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:17:45.459958 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:17:45.461845 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Aug 13 07:17:45.466364 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:17:45.466517 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:17:45.473905 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:17:45.477792 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:45.477792 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:45.480905 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:17:45.480929 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:17:45.482340 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:17:45.493031 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:17:45.524702 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:17:45.524854 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:17:45.526011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:17:45.528036 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:17:45.528389 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:17:45.529248 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:17:45.551332 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:17:45.566048 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:17:45.577739 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:17:45.579153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:45.581419 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:17:45.582527 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:17:45.582702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:17:45.587245 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:17:45.587441 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:17:45.589295 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:17:45.589598 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:17:45.589928 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:17:45.590401 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:17:45.590713 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:17:45.591239 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:17:45.591575 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:17:45.591910 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:17:45.592340 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:17:45.592499 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:17:45.608427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Aug 13 07:17:45.608657 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:17:45.610547 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:17:45.610856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:17:45.613831 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:17:45.613977 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:17:45.616899 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:17:45.617028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:17:45.618167 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:17:45.620780 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:17:45.620951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:17:45.624134 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:17:45.625245 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:17:45.628196 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:17:45.628328 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:17:45.629873 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:17:45.629990 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:17:45.630349 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:17:45.630495 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:17:45.633138 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:17:45.633276 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:17:45.647011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:17:45.647117 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:17:45.647243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:17:45.648413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:17:45.653425 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:17:45.655420 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:17:45.657744 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:17:45.658988 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:17:45.664707 ignition[1009]: INFO : Ignition 2.19.0 Aug 13 07:17:45.664707 ignition[1009]: INFO : Stage: umount Aug 13 07:17:45.668550 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:17:45.668550 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:17:45.668550 ignition[1009]: INFO : umount: umount passed Aug 13 07:17:45.668550 ignition[1009]: INFO : Ignition finished successfully Aug 13 07:17:45.666094 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:17:45.666228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:17:45.675270 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:17:45.676381 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:17:45.679701 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Aug 13 07:17:45.682247 systemd[1]: Stopped target network.target - Network. Aug 13 07:17:45.684616 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:17:45.685833 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:17:45.688490 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:17:45.688557 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:17:45.692269 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:17:45.693619 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:17:45.696145 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:17:45.696210 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:17:45.699844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:17:45.702599 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:17:45.708909 systemd-networkd[782]: eth0: DHCPv6 lease lost Aug 13 07:17:45.711342 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:17:45.711538 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:17:45.713725 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:17:45.713770 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:17:45.725020 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:17:45.725962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:17:45.726032 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:17:45.728525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:17:45.732561 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:17:45.732692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:17:45.735650 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:17:45.735734 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:17:45.737019 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:17:45.737071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:45.739157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:17:45.739209 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:17:45.750341 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:17:45.750565 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:17:45.753024 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:17:45.753184 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:17:45.817782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:17:45.817892 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:17:45.819561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:17:45.819608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:17:45.821497 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:17:45.821559 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Aug 13 07:17:45.823971 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:17:45.824033 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:17:45.825555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:17:45.825608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:17:45.833996 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:17:45.834084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:17:45.834167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:17:45.834468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:17:45.834537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:45.842841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:17:45.842982 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:17:45.880947 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:17:45.881117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:17:45.883422 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:17:45.885268 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:17:45.885327 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:17:45.895998 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:17:45.904293 systemd[1]: Switching root. Aug 13 07:17:45.942842 systemd-journald[193]: Journal stopped Aug 13 07:17:47.215561 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Aug 13 07:17:47.215647 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:17:47.215661 kernel: SELinux: policy capability open_perms=1 Aug 13 07:17:47.215677 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:17:47.215695 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:17:47.215706 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:17:47.215717 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:17:47.215728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:17:47.215740 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:17:47.215752 kernel: audit: type=1403 audit(1755069466.396:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:17:47.215764 systemd[1]: Successfully loaded SELinux policy in 43.167ms. Aug 13 07:17:47.215794 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.196ms. Aug 13 07:17:47.215824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:17:47.215837 systemd[1]: Detected virtualization kvm. Aug 13 07:17:47.215849 systemd[1]: Detected architecture x86-64. Aug 13 07:17:47.215861 systemd[1]: Detected first boot. Aug 13 07:17:47.215873 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:17:47.215885 zram_generator::config[1054]: No configuration found. Aug 13 07:17:47.215909 systemd[1]: Populated /etc with preset unit settings. 
Aug 13 07:17:47.215921 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:17:47.215936 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:17:47.215948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:17:47.215962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:17:47.215978 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:17:47.215993 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:17:47.216006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:17:47.216018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:17:47.216030 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:17:47.216046 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:17:47.216066 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:17:47.216079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:17:47.216091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:17:47.216104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:17:47.216118 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:17:47.216132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:17:47.216145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:17:47.216164 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:17:47.216179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:17:47.216192 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:17:47.216209 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:17:47.216221 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:17:47.216233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:17:47.216246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:17:47.216258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:17:47.216270 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:17:47.216285 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:17:47.216297 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:17:47.216310 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:17:47.216321 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:17:47.216333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:17:47.216345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:17:47.216357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:17:47.216369 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Aug 13 07:17:47.216381 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:17:47.216395 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:17:47.216407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:47.216419 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:17:47.216431 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:17:47.216442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:17:47.216455 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:17:47.216471 systemd[1]: Reached target machines.target - Containers. Aug 13 07:17:47.216482 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:17:47.216494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:17:47.216510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:17:47.216521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:17:47.216533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:17:47.216545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:17:47.216557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:17:47.216569 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:17:47.216580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:17:47.216593 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:17:47.216607 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:17:47.216619 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:17:47.216631 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:17:47.216643 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:17:47.216655 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:17:47.216666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:17:47.216678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:17:47.216690 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:17:47.216702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:17:47.216716 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:17:47.216732 systemd[1]: Stopped verity-setup.service. Aug 13 07:17:47.216744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:47.216756 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:17:47.216771 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:17:47.216783 systemd[1]: Mounted media.mount - External Media Directory. 
Aug 13 07:17:47.216795 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:17:47.216807 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:17:47.216860 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:17:47.216872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:17:47.216884 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:17:47.216896 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:17:47.216908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:17:47.216923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:17:47.216935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:17:47.216946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:17:47.216984 systemd-journald[1117]: Collecting audit messages is disabled. Aug 13 07:17:47.217009 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:17:47.217022 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:17:47.217034 systemd-journald[1117]: Journal started Aug 13 07:17:47.217068 systemd-journald[1117]: Runtime Journal (/run/log/journal/1b62513d73ab41fd9a5523d8f52b11cb) is 6.0M, max 48.3M, 42.2M free. Aug 13 07:17:46.947849 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:17:46.970106 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:17:46.970636 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:17:47.221034 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:17:47.227973 kernel: loop: module loaded Aug 13 07:17:47.233748 kernel: fuse: init (API version 7.39) Aug 13 07:17:47.231983 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:17:47.232215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:17:47.233880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:17:47.235409 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:17:47.235582 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:17:47.237907 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:17:47.239582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:17:47.258443 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:17:47.268994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:17:47.272194 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:17:47.272241 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:17:47.274332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:17:47.276753 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:17:47.281972 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:17:47.351165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 07:17:47.355969 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:17:47.382839 kernel: ACPI: bus type drm_connector registered Aug 13 07:17:47.386086 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:17:47.387258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:17:47.390509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:17:47.391755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:17:47.395866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:17:47.403615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:17:47.407114 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:17:47.407376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:17:47.409967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:17:47.412378 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:17:47.421046 systemd-journald[1117]: Time spent on flushing to /var/log/journal/1b62513d73ab41fd9a5523d8f52b11cb is 14.529ms for 994 entries. Aug 13 07:17:47.421046 systemd-journald[1117]: System Journal (/var/log/journal/1b62513d73ab41fd9a5523d8f52b11cb) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:17:47.460004 systemd-journald[1117]: Received client request to flush runtime journal. Aug 13 07:17:47.460060 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:17:47.415631 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:17:47.424246 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:17:47.432397 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:17:47.437079 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:17:47.449069 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:17:47.451886 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:17:47.454553 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:17:47.458358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:17:47.462542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:17:47.478883 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:17:47.479589 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 07:17:47.491286 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:17:47.493801 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:17:47.505834 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 07:17:47.508983 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:17:47.516024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 13 07:17:47.541019 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:17:47.541524 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 07:17:47.542850 kernel: loop2: detected capacity change from 0 to 229808 Aug 13 07:17:47.551378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:17:47.585845 kernel: loop3: detected capacity change from 0 to 140768 Aug 13 07:17:47.598843 kernel: loop4: detected capacity change from 0 to 142488 Aug 13 07:17:47.611872 kernel: loop5: detected capacity change from 0 to 229808 Aug 13 07:17:47.618569 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:17:47.619376 (sd-merge)[1192]: Merged extensions into '/usr'. Aug 13 07:17:47.623262 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:17:47.623280 systemd[1]: Reloading... Aug 13 07:17:47.710845 zram_generator::config[1215]: No configuration found. Aug 13 07:17:47.897498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:47.901336 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:17:47.948991 systemd[1]: Reloading finished in 325 ms. Aug 13 07:17:47.982261 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:17:47.983859 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:17:48.020056 systemd[1]: Starting ensure-sysext.service... Aug 13 07:17:48.022114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:17:48.029127 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:17:48.029149 systemd[1]: Reloading... Aug 13 07:17:48.072062 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:17:48.072455 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:17:48.074522 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:17:48.075294 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Aug 13 07:17:48.075996 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Aug 13 07:17:48.085802 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:17:48.085859 systemd-tmpfiles[1256]: Skipping /boot Aug 13 07:17:48.101864 zram_generator::config[1282]: No configuration found. Aug 13 07:17:48.107351 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:17:48.107369 systemd-tmpfiles[1256]: Skipping /boot Aug 13 07:17:48.223977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:48.273981 systemd[1]: Reloading finished in 244 ms. Aug 13 07:17:48.295772 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:17:48.297475 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 07:17:48.315595 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:17:48.319082 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:17:48.323193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:17:48.327777 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:17:48.331620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:17:48.337048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:17:48.346057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:17:48.349849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:48.350142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:17:48.360152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:17:48.364326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:17:48.369340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:17:48.371193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:17:48.371505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:48.372992 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:17:48.374617 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Aug 13 07:17:48.375905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:17:48.376106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:17:48.379313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:17:48.379541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:17:48.383066 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:17:48.383297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:17:48.395692 augenrules[1348]: No rules Aug 13 07:17:48.399787 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:17:48.402474 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:17:48.408745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:48.409454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:17:48.417204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:17:48.423137 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:17:48.430229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:17:48.439391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:17:48.440744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 07:17:48.444674 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:17:48.446947 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:17:48.448070 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:17:48.449574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:17:48.451736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:17:48.453692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:17:48.453905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:17:48.455882 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:17:48.456108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:17:48.457957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:17:48.458160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:17:48.460388 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:17:48.460578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:17:48.466302 systemd[1]: Finished ensure-sysext.service. Aug 13 07:17:48.467724 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:17:48.504704 systemd-resolved[1326]: Positive Trust Anchors: Aug 13 07:17:48.505414 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:17:48.505448 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:17:48.510189 systemd-resolved[1326]: Defaulting to hostname 'linux'. Aug 13 07:17:48.511187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:17:48.512365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:17:48.512449 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:17:48.515937 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:17:48.517187 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:17:48.517348 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:17:48.519131 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:17:48.520086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 07:17:48.522898 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364) Aug 13 07:17:48.608848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:17:48.613903 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:17:48.615084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:17:48.616836 systemd-networkd[1393]: lo: Link UP Aug 13 07:17:48.617200 systemd-networkd[1393]: lo: Gained carrier Aug 13 07:17:48.619068 systemd-networkd[1393]: Enumeration completed Aug 13 07:17:48.624053 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:17:48.624510 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:48.624522 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:17:48.626967 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:17:48.627391 systemd-networkd[1393]: eth0: Link UP Aug 13 07:17:48.627461 systemd-networkd[1393]: eth0: Gained carrier Aug 13 07:17:48.627523 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:17:48.628332 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:17:48.630047 systemd[1]: Reached target network.target - Network. Aug 13 07:17:48.631167 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:17:48.636857 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 07:17:48.642237 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:17:48.642466 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:17:48.643464 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:17:48.641916 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:17:48.642034 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:17:48.644636 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Aug 13 07:17:48.646057 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:17:48.646110 systemd-timesyncd[1395]: Initial clock synchronization to Wed 2025-08-13 07:17:48.377367 UTC. Aug 13 07:17:48.649056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:17:48.650310 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:17:48.704850 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:17:48.746684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:17:48.751246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:17:48.751700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:48.760116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:17:48.778889 kernel: kvm_amd: TSC scaling supported Aug 13 07:17:48.778936 kernel: kvm_amd: Nested Virtualization enabled Aug 13 07:17:48.778969 kernel: kvm_amd: Nested Paging enabled Aug 13 07:17:48.779874 kernel: kvm_amd: LBR virtualization supported Aug 13 07:17:48.779891 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 07:17:48.780848 kernel: kvm_amd: Virtual GIF supported Aug 13 07:17:48.803838 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:17:48.847501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:17:48.850914 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:17:48.863972 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:17:48.873563 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:17:48.906154 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:17:48.907838 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:17:48.909068 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:17:48.910331 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:17:48.911670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:17:48.913245 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:17:48.914802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:17:48.916145 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:17:48.917386 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:17:48.917431 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:17:48.918362 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:17:48.920470 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:17:48.924122 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:17:48.933625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:17:48.936489 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:17:48.938223 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:17:48.939416 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:17:48.940392 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:17:48.941386 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:17:48.941427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:17:48.942797 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:17:48.945254 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:17:48.947900 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:17:48.949920 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:17:48.952152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Aug 13 07:17:48.953173 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:17:48.955979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:17:48.960087 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:17:48.963718 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:17:48.974015 jq[1429]: false Aug 13 07:17:48.968205 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:17:48.972964 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:17:48.974475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:17:48.974920 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:17:48.980971 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:17:48.988933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:17:48.990362 dbus-daemon[1428]: [system] SELinux support is enabled Aug 13 07:17:48.991694 extend-filesystems[1430]: Found loop3 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found loop4 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found loop5 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found sr0 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda1 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda2 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda3 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found usr Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda4 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda6 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda7 Aug 13 07:17:48.991694 extend-filesystems[1430]: Found vda9 Aug 13 07:17:48.991694 extend-filesystems[1430]: Checking size of /dev/vda9 Aug 13 07:17:48.991669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:17:49.005324 extend-filesystems[1430]: Resized partition /dev/vda9 Aug 13 07:17:48.996555 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:17:49.000497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:17:49.005724 jq[1442]: true Aug 13 07:17:49.001724 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:17:49.002140 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:17:49.002334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:17:49.017358 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:17:49.017631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 07:17:49.017924 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:17:49.023942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1371) Aug 13 07:17:49.030357 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:17:49.030432 update_engine[1438]: I20250813 07:17:49.025930 1438 main.cc:92] Flatcar Update Engine starting Aug 13 07:17:49.030432 update_engine[1438]: I20250813 07:17:49.027283 1438 update_check_scheduler.cc:74] Next update check in 4m55s Aug 13 07:17:49.043489 jq[1455]: true Aug 13 07:17:49.053848 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:17:49.055314 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:17:49.074115 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:17:49.086160 tar[1454]: linux-amd64/LICENSE Aug 13 07:17:49.076413 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:17:49.124100 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:17:49.124218 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:17:49.124218 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:17:49.124218 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:17:49.127510 tar[1454]: linux-amd64/helm Aug 13 07:17:49.076454 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:17:49.127664 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Aug 13 07:17:49.077740 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:17:49.077756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:17:49.087182 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:17:49.087205 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:17:49.088766 systemd-logind[1436]: New seat seat0. Aug 13 07:17:49.122293 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:17:49.123708 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:17:49.125156 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:17:49.125375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:17:49.148660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:17:49.158053 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:17:49.165697 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:17:49.165920 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:17:49.169918 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:17:49.273670 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
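For scale, the EXT4 block counts in the resize messages above translate to sizes as follows; a quick Python sketch, with both block counts copied from the log (4 KiB blocks, per the "(4k)" in resize2fs's output):

    # Convert the logged ext4 block counts (4 KiB blocks) to GiB.
    old_blocks, new_blocks = 553472, 1864699
    block_kib = 4
    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * block_kib / 1024**2:.2f} GiB")
    # before: 2.11 GiB
    # after:  7.11 GiB

So the root filesystem grew from roughly 2.1 GiB to 7.1 GiB during this first-boot resize.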
Aug 13 07:17:49.275556 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:17:49.280420 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:17:49.286258 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:17:49.293985 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:17:49.295256 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:17:49.296847 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:17:49.300070 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:17:50.035123 systemd-networkd[1393]: eth0: Gained IPv6LL Aug 13 07:17:50.044742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:17:50.047576 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:17:50.057292 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:17:50.071777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:50.075924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:17:50.109651 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:17:50.110038 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:17:50.137485 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:17:50.145980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:17:50.254230 containerd[1456]: time="2025-08-13T07:17:50.254136766Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:17:50.287101 containerd[1456]: time="2025-08-13T07:17:50.286958337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289313361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289343467Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289373711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289576165Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289592637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289660661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289672796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289903043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289918854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289931098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290183 containerd[1456]: time="2025-08-13T07:17:50.289939879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290414 containerd[1456]: time="2025-08-13T07:17:50.290041023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290414 containerd[1456]: time="2025-08-13T07:17:50.290325387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290499 containerd[1456]: time="2025-08-13T07:17:50.290472479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:17:50.290499 containerd[1456]: time="2025-08-13T07:17:50.290492005Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:17:50.290627 containerd[1456]: time="2025-08-13T07:17:50.290596709Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:17:50.290730 containerd[1456]: time="2025-08-13T07:17:50.290707723Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:17:50.483459 tar[1454]: linux-amd64/README.md Aug 13 07:17:50.503097 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:17:50.691331 containerd[1456]: time="2025-08-13T07:17:50.691231618Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:17:50.691523 containerd[1456]: time="2025-08-13T07:17:50.691359602Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:17:50.691523 containerd[1456]: time="2025-08-13T07:17:50.691380169Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:17:50.691523 containerd[1456]: time="2025-08-13T07:17:50.691411132Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:17:50.691523 containerd[1456]: time="2025-08-13T07:17:50.691446733Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:17:50.691716 containerd[1456]: time="2025-08-13T07:17:50.691686052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 07:17:50.692035 containerd[1456]: time="2025-08-13T07:17:50.691996497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:17:50.692149 containerd[1456]: time="2025-08-13T07:17:50.692129829Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:17:50.692171 containerd[1456]: time="2025-08-13T07:17:50.692149822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:17:50.692171 containerd[1456]: time="2025-08-13T07:17:50.692162454Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:17:50.692208 containerd[1456]: time="2025-08-13T07:17:50.692174192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692239 containerd[1456]: time="2025-08-13T07:17:50.692223174Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692258 containerd[1456]: time="2025-08-13T07:17:50.692245472Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692287 containerd[1456]: time="2025-08-13T07:17:50.692276649Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692313 containerd[1456]: time="2025-08-13T07:17:50.692297819Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692336 containerd[1456]: time="2025-08-13T07:17:50.692314039Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692336 containerd[1456]: time="2025-08-13T07:17:50.692327450Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692370 containerd[1456]: time="2025-08-13T07:17:50.692343592Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:17:50.692388 containerd[1456]: time="2025-08-13T07:17:50.692375148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692407 containerd[1456]: time="2025-08-13T07:17:50.692388422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692435 containerd[1456]: time="2025-08-13T07:17:50.692404943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692435 containerd[1456]: time="2025-08-13T07:17:50.692422341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692501 containerd[1456]: time="2025-08-13T07:17:50.692439533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692501 containerd[1456]: time="2025-08-13T07:17:50.692463077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692501 containerd[1456]: time="2025-08-13T07:17:50.692479113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Aug 13 07:17:50.692558 containerd[1456]: time="2025-08-13T07:17:50.692502325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692558 containerd[1456]: time="2025-08-13T07:17:50.692516678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692558 containerd[1456]: time="2025-08-13T07:17:50.692532354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692613 containerd[1456]: time="2025-08-13T07:17:50.692560351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692613 containerd[1456]: time="2025-08-13T07:17:50.692579138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692613 containerd[1456]: time="2025-08-13T07:17:50.692592519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692613 containerd[1456]: time="2025-08-13T07:17:50.692610782Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:17:50.692686 containerd[1456]: time="2025-08-13T07:17:50.692629044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692686 containerd[1456]: time="2025-08-13T07:17:50.692647988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692686 containerd[1456]: time="2025-08-13T07:17:50.692660182Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:17:50.692741 containerd[1456]: time="2025-08-13T07:17:50.692717556Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:17:50.692768 containerd[1456]: time="2025-08-13T07:17:50.692741032Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:17:50.692768 containerd[1456]: time="2025-08-13T07:17:50.692751854Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:17:50.692852 containerd[1456]: time="2025-08-13T07:17:50.692773024Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:17:50.692852 containerd[1456]: time="2025-08-13T07:17:50.692792017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:17:50.692852 containerd[1456]: time="2025-08-13T07:17:50.692807955Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:17:50.692852 containerd[1456]: time="2025-08-13T07:17:50.692832685Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:17:50.692852 containerd[1456]: time="2025-08-13T07:17:50.692845744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:17:50.694091 containerd[1456]: time="2025-08-13T07:17:50.693224493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:17:50.694456 containerd[1456]: time="2025-08-13T07:17:50.694419951Z" level=info msg="Connect containerd service" Aug 13 07:17:50.694532 containerd[1456]: time="2025-08-13T07:17:50.694506917Z" level=info msg="using legacy CRI server" Aug 13 07:17:50.694579 containerd[1456]: time="2025-08-13T07:17:50.694525735Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:17:50.694916 containerd[1456]: time="2025-08-13T07:17:50.694893076Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:17:50.695827 containerd[1456]: time="2025-08-13T07:17:50.695789458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:17:50.696040 
containerd[1456]: time="2025-08-13T07:17:50.695969945Z" level=info msg="Start subscribing containerd event" Aug 13 07:17:50.696091 containerd[1456]: time="2025-08-13T07:17:50.696070729Z" level=info msg="Start recovering state" Aug 13 07:17:50.696238 containerd[1456]: time="2025-08-13T07:17:50.696217384Z" level=info msg="Start event monitor" Aug 13 07:17:50.697151 containerd[1456]: time="2025-08-13T07:17:50.696849397Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:17:50.697192 containerd[1456]: time="2025-08-13T07:17:50.696738217Z" level=info msg="Start snapshots syncer" Aug 13 07:17:50.697192 containerd[1456]: time="2025-08-13T07:17:50.697181108Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:17:50.697192 containerd[1456]: time="2025-08-13T07:17:50.697192486Z" level=info msg="Start streaming server" Aug 13 07:17:50.697297 containerd[1456]: time="2025-08-13T07:17:50.697206237Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:17:50.697407 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:17:50.697916 containerd[1456]: time="2025-08-13T07:17:50.697846700Z" level=info msg="containerd successfully booted in 0.445000s" Aug 13 07:17:51.497726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:51.499597 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:17:51.500925 systemd[1]: Startup finished in 1.116s (kernel) + 5.688s (initrd) + 5.146s (userspace) = 11.951s. Aug 13 07:17:51.503176 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:17:52.195880 kubelet[1542]: E0813 07:17:52.195782 1542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:17:52.200353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:17:52.200597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:17:52.201032 systemd[1]: kubelet.service: Consumed 1.925s CPU time. Aug 13 07:17:53.435735 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:17:53.437131 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:37786.service - OpenSSH per-connection server daemon (10.0.0.1:37786). Aug 13 07:17:53.480770 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 37786 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:53.482916 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:53.492198 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:17:53.504144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:17:53.506557 systemd-logind[1436]: New session 1 of user core. Aug 13 07:17:53.517998 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:17:53.521031 systemd[1]: Starting user@500.service - User Manager for UID 500... 
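The kubelet failure above appears expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps it in a restart loop (the "restart counter" entries later in this log) until the file appears; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join. Purely to illustrate the file the error refers to, a minimal sketch follows: the apiVersion and kind are the public KubeletConfiguration v1beta1 schema, the systemd cgroup driver matches the SystemdCgroup:true runc option in the containerd config dump above, and everything else is an assumption, not something taken from this host:

    # Illustrative only: materialize the smallest well-formed KubeletConfiguration
    # at the path named in the error. On this host the file is normally created
    # by kubeadm, not by hand; requires root for /var/lib/kubelet.
    from pathlib import Path

    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    cfg = Path("/var/lib/kubelet/config.yaml")
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(KUBELET_CONFIG)
    print(f"wrote {cfg}")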
Aug 13 07:17:53.530394 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:17:53.652695 systemd[1559]: Queued start job for default target default.target. Aug 13 07:17:53.664312 systemd[1559]: Created slice app.slice - User Application Slice. Aug 13 07:17:53.664341 systemd[1559]: Reached target paths.target - Paths. Aug 13 07:17:53.664355 systemd[1559]: Reached target timers.target - Timers. Aug 13 07:17:53.666087 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:17:53.678671 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:17:53.678858 systemd[1559]: Reached target sockets.target - Sockets. Aug 13 07:17:53.678882 systemd[1559]: Reached target basic.target - Basic System. Aug 13 07:17:53.678927 systemd[1559]: Reached target default.target - Main User Target. Aug 13 07:17:53.678969 systemd[1559]: Startup finished in 141ms. Aug 13 07:17:53.679457 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:17:53.681319 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:17:53.746301 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:37794.service - OpenSSH per-connection server daemon (10.0.0.1:37794). Aug 13 07:17:53.801318 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 37794 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:53.803144 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:53.807629 systemd-logind[1436]: New session 2 of user core. Aug 13 07:17:53.818114 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:17:53.873432 sshd[1570]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:53.880841 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:37794.service: Deactivated successfully. Aug 13 07:17:53.882713 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:17:53.884125 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:17:53.885515 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:37808.service - OpenSSH per-connection server daemon (10.0.0.1:37808). Aug 13 07:17:53.886287 systemd-logind[1436]: Removed session 2. Aug 13 07:17:53.940965 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 37808 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:53.943023 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:53.947437 systemd-logind[1436]: New session 3 of user core. Aug 13 07:17:53.957003 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:17:54.007286 sshd[1577]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:54.021070 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:37808.service: Deactivated successfully. Aug 13 07:17:54.023149 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:17:54.024591 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:17:54.034323 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:37820.service - OpenSSH per-connection server daemon (10.0.0.1:37820). Aug 13 07:17:54.035566 systemd-logind[1436]: Removed session 3. 
Aug 13 07:17:54.063040 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 37820 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:54.064723 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:54.068935 systemd-logind[1436]: New session 4 of user core. Aug 13 07:17:54.078967 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:17:54.135484 sshd[1584]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:54.148385 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:37820.service: Deactivated successfully. Aug 13 07:17:54.151006 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:17:54.153145 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:17:54.165245 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:37824.service - OpenSSH per-connection server daemon (10.0.0.1:37824). Aug 13 07:17:54.166497 systemd-logind[1436]: Removed session 4. Aug 13 07:17:54.193121 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 37824 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:54.194791 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:54.199351 systemd-logind[1436]: New session 5 of user core. Aug 13 07:17:54.216206 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:17:54.276625 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:17:54.277007 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:17:54.293897 sudo[1594]: pam_unix(sudo:session): session closed for user root Aug 13 07:17:54.296343 sshd[1591]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:54.305283 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:37824.service: Deactivated successfully. Aug 13 07:17:54.307543 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:17:54.309045 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:17:54.320166 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:37830.service - OpenSSH per-connection server daemon (10.0.0.1:37830). Aug 13 07:17:54.321265 systemd-logind[1436]: Removed session 5. Aug 13 07:17:54.349377 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 37830 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:54.351492 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:54.356081 systemd-logind[1436]: New session 6 of user core. Aug 13 07:17:54.366108 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:17:54.422917 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:17:54.423295 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:17:54.427406 sudo[1603]: pam_unix(sudo:session): session closed for user root Aug 13 07:17:54.434554 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:17:54.435092 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:17:54.455035 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:17:54.457242 auditctl[1606]: No rules Aug 13 07:17:54.458722 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 13 07:17:54.459007 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:17:54.460886 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:17:54.510290 augenrules[1624]: No rules Aug 13 07:17:54.512302 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:17:54.513607 sudo[1602]: pam_unix(sudo:session): session closed for user root Aug 13 07:17:54.515490 sshd[1599]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:54.528108 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:37830.service: Deactivated successfully. Aug 13 07:17:54.530531 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:17:54.532070 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:17:54.533476 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:37840.service - OpenSSH per-connection server daemon (10.0.0.1:37840). Aug 13 07:17:54.534310 systemd-logind[1436]: Removed session 6. Aug 13 07:17:54.566584 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 37840 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:54.568308 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:54.572889 systemd-logind[1436]: New session 7 of user core. Aug 13 07:17:54.582949 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:17:54.638345 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:17:54.638751 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:17:55.205089 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:17:55.205409 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:17:55.751112 dockerd[1653]: time="2025-08-13T07:17:55.751024946Z" level=info msg="Starting up" Aug 13 07:17:56.296412 dockerd[1653]: time="2025-08-13T07:17:56.296350609Z" level=info msg="Loading containers: start." Aug 13 07:17:56.424916 kernel: Initializing XFRM netlink socket Aug 13 07:17:56.503503 systemd-networkd[1393]: docker0: Link UP Aug 13 07:17:56.522456 dockerd[1653]: time="2025-08-13T07:17:56.522402269Z" level=info msg="Loading containers: done." Aug 13 07:17:56.539551 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3856196281-merged.mount: Deactivated successfully. Aug 13 07:17:56.540843 dockerd[1653]: time="2025-08-13T07:17:56.540775538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:17:56.540948 dockerd[1653]: time="2025-08-13T07:17:56.540920744Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:17:56.541053 dockerd[1653]: time="2025-08-13T07:17:56.541031376Z" level=info msg="Daemon has completed initialization" Aug 13 07:17:56.578528 dockerd[1653]: time="2025-08-13T07:17:56.578396683Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:17:56.578727 systemd[1]: Started docker.service - Docker Application Container Engine. 
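The two dockerd messages above ("Starting up" and "API listen on /run/docker.sock") bracket the daemon's initialization, so subtracting their journal timestamps gives the startup time. A small sketch, with both timestamps copied from the log (the missing year is irrelevant for the subtraction):

    # Elapsed time between the two dockerd log lines above.
    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"
    started = datetime.strptime("Aug 13 07:17:55.751112", FMT)
    listening = datetime.strptime("Aug 13 07:17:56.578528", FMT)
    print(f"dockerd came up in {(listening - started).total_seconds():.3f}s")
    # dockerd came up in 0.827s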
Aug 13 07:17:57.246243 containerd[1456]: time="2025-08-13T07:17:57.246187048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 07:17:58.891910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846038619.mount: Deactivated successfully. Aug 13 07:18:00.062674 containerd[1456]: time="2025-08-13T07:18:00.062593312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:00.063327 containerd[1456]: time="2025-08-13T07:18:00.063276312Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 07:18:00.064374 containerd[1456]: time="2025-08-13T07:18:00.064343447Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:00.067239 containerd[1456]: time="2025-08-13T07:18:00.067203814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:00.068313 containerd[1456]: time="2025-08-13T07:18:00.068250659Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.822010936s" Aug 13 07:18:00.068313 containerd[1456]: time="2025-08-13T07:18:00.068305635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 07:18:00.069331 containerd[1456]: time="2025-08-13T07:18:00.069306371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 07:18:01.649588 containerd[1456]: time="2025-08-13T07:18:01.649518356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:01.650385 containerd[1456]: time="2025-08-13T07:18:01.650330529Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 07:18:01.651471 containerd[1456]: time="2025-08-13T07:18:01.651437960Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:01.654230 containerd[1456]: time="2025-08-13T07:18:01.654179918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:01.655341 containerd[1456]: time="2025-08-13T07:18:01.655297250Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.585960956s" Aug 13 
07:18:01.655382 containerd[1456]: time="2025-08-13T07:18:01.655344338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 07:18:01.655878 containerd[1456]: time="2025-08-13T07:18:01.655857841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 07:18:02.421014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:18:02.443113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:02.777777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:02.783942 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:18:02.946384 kubelet[1873]: E0813 07:18:02.946286 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:18:02.952746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:18:02.952976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:18:03.761098 containerd[1456]: time="2025-08-13T07:18:03.761014463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.762383 containerd[1456]: time="2025-08-13T07:18:03.762312902Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 07:18:03.763760 containerd[1456]: time="2025-08-13T07:18:03.763716642Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.767265 containerd[1456]: time="2025-08-13T07:18:03.767230387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.768318 containerd[1456]: time="2025-08-13T07:18:03.768282146Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 2.112328021s" Aug 13 07:18:03.768318 containerd[1456]: time="2025-08-13T07:18:03.768318385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 07:18:03.768894 containerd[1456]: time="2025-08-13T07:18:03.768861622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 07:18:05.490126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084209347.mount: Deactivated successfully. 
Aug 13 07:18:06.277000 containerd[1456]: time="2025-08-13T07:18:06.276913447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.277717 containerd[1456]: time="2025-08-13T07:18:06.277669947Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 07:18:06.278959 containerd[1456]: time="2025-08-13T07:18:06.278882657Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.280881 containerd[1456]: time="2025-08-13T07:18:06.280843253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.281473 containerd[1456]: time="2025-08-13T07:18:06.281435315Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.512539939s" Aug 13 07:18:06.281507 containerd[1456]: time="2025-08-13T07:18:06.281471118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 07:18:06.282138 containerd[1456]: time="2025-08-13T07:18:06.282093992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 07:18:06.829048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502836832.mount: Deactivated successfully. 
Aug 13 07:18:08.190764 containerd[1456]: time="2025-08-13T07:18:08.190637495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.192095 containerd[1456]: time="2025-08-13T07:18:08.191587111Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 07:18:08.193186 containerd[1456]: time="2025-08-13T07:18:08.193133720Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.197310 containerd[1456]: time="2025-08-13T07:18:08.197227635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.199636 containerd[1456]: time="2025-08-13T07:18:08.199516873Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.917359697s" Aug 13 07:18:08.199798 containerd[1456]: time="2025-08-13T07:18:08.199652197Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 07:18:08.200565 containerd[1456]: time="2025-08-13T07:18:08.200527092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:18:08.648426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775325180.mount: Deactivated successfully. 
Aug 13 07:18:08.654646 containerd[1456]: time="2025-08-13T07:18:08.654552722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.655408 containerd[1456]: time="2025-08-13T07:18:08.655296969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:18:08.656608 containerd[1456]: time="2025-08-13T07:18:08.656537607Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.659410 containerd[1456]: time="2025-08-13T07:18:08.659357610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:08.660655 containerd[1456]: time="2025-08-13T07:18:08.660600198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 460.034874ms" Aug 13 07:18:08.660655 containerd[1456]: time="2025-08-13T07:18:08.660640256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:18:08.661313 containerd[1456]: time="2025-08-13T07:18:08.661276408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 07:18:09.223555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403396661.mount: Deactivated successfully. Aug 13 07:18:11.271216 containerd[1456]: time="2025-08-13T07:18:11.271113535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.272324 containerd[1456]: time="2025-08-13T07:18:11.272229052Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 07:18:11.273747 containerd[1456]: time="2025-08-13T07:18:11.273682031Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.277610 containerd[1456]: time="2025-08-13T07:18:11.277568583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.278752 containerd[1456]: time="2025-08-13T07:18:11.278717704Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.61740233s" Aug 13 07:18:11.278752 containerd[1456]: time="2025-08-13T07:18:11.278755647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 07:18:13.170888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
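The pull messages above carry enough data for a rough throughput estimate: each "stop pulling" line reports the bytes read and each "Pulled image" line the wall-clock duration. For the largest image, etcd:3.5.21-0, a quick sketch with both numbers copied from the log:

    # Rough pull throughput for etcd:3.5.21-0 (decimal MB/s).
    bytes_read = 58_247_175   # "bytes read=58247175"
    duration_s = 2.61740233   # "... in 2.61740233s"
    print(f"~{bytes_read / duration_s / 1e6:.1f} MB/s")
    # ~22.3 MB/s

By the same arithmetic, pause:3.10 (321138 bytes in roughly 460 ms) works out to well under 1 MB/s, so that pull is dominated by registry round-trip latency rather than bandwidth.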
Aug 13 07:18:13.177977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:13.397596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:13.402072 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:18:13.448453 kubelet[2033]: E0813 07:18:13.448320 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:18:13.452862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:18:13.453069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:18:14.740897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:14.753043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:14.783145 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)... Aug 13 07:18:14.783166 systemd[1]: Reloading... Aug 13 07:18:14.874847 zram_generator::config[2090]: No configuration found. Aug 13 07:18:15.494433 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:18:15.574461 systemd[1]: Reloading finished in 790 ms. Aug 13 07:18:15.625104 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:18:15.625220 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:18:15.625491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:15.627171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:15.798392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:15.803419 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:18:15.850662 kubelet[2135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:18:15.850662 kubelet[2135]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:18:15.850662 kubelet[2135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:18:15.851119 kubelet[2135]: I0813 07:18:15.850684 2135 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:18:16.981956 kubelet[2135]: I0813 07:18:16.981878 2135 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:18:16.981956 kubelet[2135]: I0813 07:18:16.981936 2135 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:18:16.984087 kubelet[2135]: I0813 07:18:16.983169 2135 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:18:17.012554 kubelet[2135]: E0813 07:18:17.012454 2135 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 07:18:17.012554 kubelet[2135]: I0813 07:18:17.012519 2135 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:18:17.023102 kubelet[2135]: E0813 07:18:17.023042 2135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:18:17.023102 kubelet[2135]: I0813 07:18:17.023096 2135 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:18:17.029505 kubelet[2135]: I0813 07:18:17.029468 2135 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:18:17.029871 kubelet[2135]: I0813 07:18:17.029795 2135 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:18:17.030081 kubelet[2135]: I0813 07:18:17.029858 2135 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:18:17.030081 kubelet[2135]: I0813 07:18:17.030084 2135 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:18:17.030253 kubelet[2135]: I0813 07:18:17.030096 2135 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:18:17.031203 kubelet[2135]: I0813 07:18:17.031167 2135 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:17.033899 kubelet[2135]: I0813 07:18:17.033863 2135 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:18:17.033899 kubelet[2135]: I0813 07:18:17.033883 2135 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:18:17.033991 kubelet[2135]: I0813 07:18:17.033926 2135 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:18:17.033991 kubelet[2135]: I0813 07:18:17.033946 2135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:18:17.040778 kubelet[2135]: I0813 07:18:17.040718 2135 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:18:17.041437 kubelet[2135]: I0813 07:18:17.041412 2135 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:18:17.042525 kubelet[2135]: E0813 07:18:17.042483 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 
07:18:17.043109 kubelet[2135]: W0813 07:18:17.043063 2135 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:18:17.043306 kubelet[2135]: E0813 07:18:17.043278 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 07:18:17.046958 kubelet[2135]: I0813 07:18:17.046919 2135 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:18:17.047032 kubelet[2135]: I0813 07:18:17.047014 2135 server.go:1289] "Started kubelet" Aug 13 07:18:17.047975 kubelet[2135]: I0813 07:18:17.047913 2135 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:18:17.048497 kubelet[2135]: I0813 07:18:17.048451 2135 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:18:17.048615 kubelet[2135]: I0813 07:18:17.048517 2135 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:18:17.048700 kubelet[2135]: I0813 07:18:17.048679 2135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:18:17.050026 kubelet[2135]: I0813 07:18:17.049709 2135 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:18:17.051615 kubelet[2135]: E0813 07:18:17.050669 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:18:17.051615 kubelet[2135]: I0813 07:18:17.050713 2135 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:18:17.051615 kubelet[2135]: I0813 07:18:17.050992 2135 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:18:17.051615 kubelet[2135]: I0813 07:18:17.051102 2135 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:18:17.051615 kubelet[2135]: E0813 07:18:17.051603 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:18:17.051987 kubelet[2135]: I0813 07:18:17.051934 2135 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:18:17.052124 kubelet[2135]: I0813 07:18:17.052028 2135 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:18:17.052519 kubelet[2135]: I0813 07:18:17.052485 2135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:18:17.054970 kubelet[2135]: E0813 07:18:17.054592 2135 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:18:17.055475 kubelet[2135]: I0813 07:18:17.055451 2135 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:18:17.056147 kubelet[2135]: E0813 07:18:17.056109 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms" Aug 13 07:18:17.056198 kubelet[2135]: E0813 07:18:17.053527 2135 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b4268a79b76b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:18:17.046955701 +0000 UTC m=+1.238325461,LastTimestamp:2025-08-13 07:18:17.046955701 +0000 UTC m=+1.238325461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:18:17.070770 kubelet[2135]: I0813 07:18:17.070726 2135 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:18:17.070770 kubelet[2135]: I0813 07:18:17.070758 2135 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:18:17.070770 kubelet[2135]: I0813 07:18:17.070793 2135 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:17.074554 kubelet[2135]: I0813 07:18:17.074501 2135 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:18:17.076181 kubelet[2135]: I0813 07:18:17.075975 2135 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:18:17.076181 kubelet[2135]: I0813 07:18:17.076007 2135 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:18:17.076181 kubelet[2135]: I0813 07:18:17.076031 2135 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
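
[Annotation] The lease controller's retry interval above starts at 200ms and, as later entries in this log show, grows to 400ms and then 800ms while the API server at 10.0.0.142:6443 stays unreachable. A minimal sketch of that kind of capped exponential backoff, using only the standard library; the doubling matches the observed intervals, but the 7s cap is an assumption, not a value read out of the kubelet source:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // initial retry interval, as logged
        const maxInterval = 7 * time.Second // assumed cap

        for {
            conn, err := net.DialTimeout("tcp", "10.0.0.142:6443", time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable, lease can be ensured")
                return
            }
            fmt.Printf("failed to ensure lease exists, will retry (interval=%s): %v\n", interval, err)
            time.Sleep(interval)
            if interval < maxInterval {
                interval *= 2 // 200ms -> 400ms -> 800ms, matching the log
            }
        }
    }
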
Aug 13 07:18:17.076181 kubelet[2135]: I0813 07:18:17.076042 2135 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:18:17.076181 kubelet[2135]: E0813 07:18:17.076093 2135 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:18:17.076313 kubelet[2135]: I0813 07:18:17.076296 2135 policy_none.go:49] "None policy: Start" Aug 13 07:18:17.076335 kubelet[2135]: I0813 07:18:17.076327 2135 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:18:17.076367 kubelet[2135]: I0813 07:18:17.076345 2135 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:18:17.077376 kubelet[2135]: E0813 07:18:17.077344 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:18:17.084144 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:18:17.112367 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:18:17.115752 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:18:17.126293 kubelet[2135]: E0813 07:18:17.126234 2135 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:18:17.126612 kubelet[2135]: I0813 07:18:17.126582 2135 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:18:17.126771 kubelet[2135]: I0813 07:18:17.126613 2135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:18:17.127345 kubelet[2135]: I0813 07:18:17.126975 2135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:18:17.128092 kubelet[2135]: E0813 07:18:17.128069 2135 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:18:17.128162 kubelet[2135]: E0813 07:18:17.128127 2135 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:18:17.187944 systemd[1]: Created slice kubepods-burstable-pod8a01567077f282b41594e5eb67c8159f.slice - libcontainer container kubepods-burstable-pod8a01567077f282b41594e5eb67c8159f.slice. Aug 13 07:18:17.205182 kubelet[2135]: E0813 07:18:17.205131 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:17.210409 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 13 07:18:17.212263 kubelet[2135]: E0813 07:18:17.212228 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:17.214294 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
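
[Annotation] The kubepods-burstable-pod<hash>.slice units systemd creates here follow the kubelet's systemd cgroup driver naming: the QoS class becomes a nested slice under kubepods.slice, and any dashes in the pod UID are replaced with underscores, because "-" is the hierarchy separator in systemd slice names (visible later in this log as kubepods-besteffort-pod00e55a6c_d586_4e07_9932_6b258c727342.slice for the tigera-operator pod). A rough sketch of that naming rule; the helper below is illustrative, not kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameFor mimics how the systemd cgroup driver derives a pod
    // slice name from QoS class and pod UID (hypothetical helper).
    func sliceNameFor(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_") // "-" separates slice levels in systemd
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", escaped) // guaranteed pods sit directly under kubepods.slice
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        fmt.Println(sliceNameFor("besteffort", "00e55a6c-d586-4e07-9932-6b258c727342"))
        // kubepods-besteffort-pod00e55a6c_d586_4e07_9932_6b258c727342.slice
    }
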
Aug 13 07:18:17.216023 kubelet[2135]: E0813 07:18:17.215979 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:17.228381 kubelet[2135]: I0813 07:18:17.228338 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:18:17.228786 kubelet[2135]: E0813 07:18:17.228759 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Aug 13 07:18:17.254029 kubelet[2135]: I0813 07:18:17.253313 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:17.254029 kubelet[2135]: I0813 07:18:17.253374 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:17.254029 kubelet[2135]: I0813 07:18:17.253411 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:17.254029 kubelet[2135]: I0813 07:18:17.253431 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:17.254029 kubelet[2135]: I0813 07:18:17.253463 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:17.254423 kubelet[2135]: I0813 07:18:17.253568 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:17.254423 kubelet[2135]: I0813 07:18:17.253628 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:17.254423 kubelet[2135]: I0813 07:18:17.253650 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:17.254423 kubelet[2135]: I0813 07:18:17.253672 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:17.256945 kubelet[2135]: E0813 07:18:17.256905 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Aug 13 07:18:17.430514 kubelet[2135]: I0813 07:18:17.430437 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:18:17.430953 kubelet[2135]: E0813 07:18:17.430897 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Aug 13 07:18:17.505748 kubelet[2135]: E0813 07:18:17.505588 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:17.506479 containerd[1456]: time="2025-08-13T07:18:17.506426049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a01567077f282b41594e5eb67c8159f,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:17.512767 kubelet[2135]: E0813 07:18:17.512711 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:17.513318 containerd[1456]: time="2025-08-13T07:18:17.513270132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:17.516537 kubelet[2135]: E0813 07:18:17.516510 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:17.517030 containerd[1456]: time="2025-08-13T07:18:17.516988286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:17.658224 kubelet[2135]: E0813 07:18:17.658180 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Aug 13 07:18:17.832380 kubelet[2135]: I0813 07:18:17.832236 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:18:17.832670 kubelet[2135]: E0813 07:18:17.832642 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Aug 13 07:18:17.963586 
kubelet[2135]: E0813 07:18:17.963506 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 07:18:18.014687 kubelet[2135]: E0813 07:18:18.014616 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 07:18:18.039532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850136555.mount: Deactivated successfully. Aug 13 07:18:18.045529 containerd[1456]: time="2025-08-13T07:18:18.045452533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:18:18.046496 containerd[1456]: time="2025-08-13T07:18:18.046457105Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:18:18.047399 containerd[1456]: time="2025-08-13T07:18:18.047352609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:18:18.048396 containerd[1456]: time="2025-08-13T07:18:18.048347830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:18:18.049173 containerd[1456]: time="2025-08-13T07:18:18.049101086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:18:18.049939 containerd[1456]: time="2025-08-13T07:18:18.049908346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:18:18.050948 containerd[1456]: time="2025-08-13T07:18:18.050906381Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:18:18.054413 containerd[1456]: time="2025-08-13T07:18:18.054361628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:18:18.055118 containerd[1456]: time="2025-08-13T07:18:18.055083597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.015064ms" Aug 13 07:18:18.057648 containerd[1456]: time="2025-08-13T07:18:18.057617510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.099269ms" Aug 13 07:18:18.059269 containerd[1456]: time="2025-08-13T07:18:18.059238537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.880991ms" Aug 13 07:18:18.194062 containerd[1456]: time="2025-08-13T07:18:18.193947584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:18.194062 containerd[1456]: time="2025-08-13T07:18:18.194012110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:18.194062 containerd[1456]: time="2025-08-13T07:18:18.193950688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:18.194062 containerd[1456]: time="2025-08-13T07:18:18.194009768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:18.194062 containerd[1456]: time="2025-08-13T07:18:18.194022002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.194616 containerd[1456]: time="2025-08-13T07:18:18.194431935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.195496 containerd[1456]: time="2025-08-13T07:18:18.195312191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.195496 containerd[1456]: time="2025-08-13T07:18:18.195399784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.198611 containerd[1456]: time="2025-08-13T07:18:18.198510335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:18.198658 containerd[1456]: time="2025-08-13T07:18:18.198633679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:18.198724 containerd[1456]: time="2025-08-13T07:18:18.198663104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.198900 containerd[1456]: time="2025-08-13T07:18:18.198850034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:18.227026 systemd[1]: Started cri-containerd-2d70b42ee6328a816414e99d065ab6a1ae74b44f1fc3c90a1feaa52a393300af.scope - libcontainer container 2d70b42ee6328a816414e99d065ab6a1ae74b44f1fc3c90a1feaa52a393300af. Aug 13 07:18:18.232054 systemd[1]: Started cri-containerd-808f4f8d569dee26d851098e6854eabfcb49436e84a01f4863ced285a808cc3a.scope - libcontainer container 808f4f8d569dee26d851098e6854eabfcb49436e84a01f4863ced285a808cc3a. 
Aug 13 07:18:18.234267 systemd[1]: Started cri-containerd-c75515038a5c2f26785708436a6ef1a1ce4b2066da611fc5a2cbeedde0997f1d.scope - libcontainer container c75515038a5c2f26785708436a6ef1a1ce4b2066da611fc5a2cbeedde0997f1d. Aug 13 07:18:18.270916 containerd[1456]: time="2025-08-13T07:18:18.270704198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d70b42ee6328a816414e99d065ab6a1ae74b44f1fc3c90a1feaa52a393300af\"" Aug 13 07:18:18.274425 kubelet[2135]: E0813 07:18:18.274241 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:18.280942 containerd[1456]: time="2025-08-13T07:18:18.279210209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a01567077f282b41594e5eb67c8159f,Namespace:kube-system,Attempt:0,} returns sandbox id \"808f4f8d569dee26d851098e6854eabfcb49436e84a01f4863ced285a808cc3a\"" Aug 13 07:18:18.280942 containerd[1456]: time="2025-08-13T07:18:18.279929295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"c75515038a5c2f26785708436a6ef1a1ce4b2066da611fc5a2cbeedde0997f1d\"" Aug 13 07:18:18.281125 containerd[1456]: time="2025-08-13T07:18:18.281095939Z" level=info msg="CreateContainer within sandbox \"2d70b42ee6328a816414e99d065ab6a1ae74b44f1fc3c90a1feaa52a393300af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:18:18.281890 kubelet[2135]: E0813 07:18:18.281858 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:18.282310 kubelet[2135]: E0813 07:18:18.282294 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:18.286759 kubelet[2135]: E0813 07:18:18.286735 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 07:18:18.287200 containerd[1456]: time="2025-08-13T07:18:18.287166853Z" level=info msg="CreateContainer within sandbox \"808f4f8d569dee26d851098e6854eabfcb49436e84a01f4863ced285a808cc3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:18:18.294617 containerd[1456]: time="2025-08-13T07:18:18.294580839Z" level=info msg="CreateContainer within sandbox \"c75515038a5c2f26785708436a6ef1a1ce4b2066da611fc5a2cbeedde0997f1d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:18:18.310563 containerd[1456]: time="2025-08-13T07:18:18.310509766Z" level=info msg="CreateContainer within sandbox \"808f4f8d569dee26d851098e6854eabfcb49436e84a01f4863ced285a808cc3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9bd661461c3793f180c8f54b29fa105cdd09b8dc541f4a14b0db22d0a3dbe58\"" Aug 13 07:18:18.311099 containerd[1456]: time="2025-08-13T07:18:18.311068503Z" level=info msg="StartContainer for 
\"d9bd661461c3793f180c8f54b29fa105cdd09b8dc541f4a14b0db22d0a3dbe58\"" Aug 13 07:18:18.318089 containerd[1456]: time="2025-08-13T07:18:18.318006698Z" level=info msg="CreateContainer within sandbox \"2d70b42ee6328a816414e99d065ab6a1ae74b44f1fc3c90a1feaa52a393300af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"18a97e2079ed2254d3c358a3277235391345fd43e3f39fe380b1efd0e2cded02\"" Aug 13 07:18:18.318561 containerd[1456]: time="2025-08-13T07:18:18.318537243Z" level=info msg="StartContainer for \"18a97e2079ed2254d3c358a3277235391345fd43e3f39fe380b1efd0e2cded02\"" Aug 13 07:18:18.319549 containerd[1456]: time="2025-08-13T07:18:18.319514973Z" level=info msg="CreateContainer within sandbox \"c75515038a5c2f26785708436a6ef1a1ce4b2066da611fc5a2cbeedde0997f1d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38d7dc95e58ca59ee7109ec3289e68b03e98aca464b1175eb04f8090450caa71\"" Aug 13 07:18:18.320966 containerd[1456]: time="2025-08-13T07:18:18.320044897Z" level=info msg="StartContainer for \"38d7dc95e58ca59ee7109ec3289e68b03e98aca464b1175eb04f8090450caa71\"" Aug 13 07:18:18.339257 systemd[1]: Started cri-containerd-d9bd661461c3793f180c8f54b29fa105cdd09b8dc541f4a14b0db22d0a3dbe58.scope - libcontainer container d9bd661461c3793f180c8f54b29fa105cdd09b8dc541f4a14b0db22d0a3dbe58. Aug 13 07:18:18.345066 systemd[1]: Started cri-containerd-38d7dc95e58ca59ee7109ec3289e68b03e98aca464b1175eb04f8090450caa71.scope - libcontainer container 38d7dc95e58ca59ee7109ec3289e68b03e98aca464b1175eb04f8090450caa71. Aug 13 07:18:18.350652 systemd[1]: Started cri-containerd-18a97e2079ed2254d3c358a3277235391345fd43e3f39fe380b1efd0e2cded02.scope - libcontainer container 18a97e2079ed2254d3c358a3277235391345fd43e3f39fe380b1efd0e2cded02. 
Aug 13 07:18:18.391994 containerd[1456]: time="2025-08-13T07:18:18.391948930Z" level=info msg="StartContainer for \"38d7dc95e58ca59ee7109ec3289e68b03e98aca464b1175eb04f8090450caa71\" returns successfully" Aug 13 07:18:18.392967 containerd[1456]: time="2025-08-13T07:18:18.392228399Z" level=info msg="StartContainer for \"d9bd661461c3793f180c8f54b29fa105cdd09b8dc541f4a14b0db22d0a3dbe58\" returns successfully" Aug 13 07:18:18.409746 containerd[1456]: time="2025-08-13T07:18:18.409693803Z" level=info msg="StartContainer for \"18a97e2079ed2254d3c358a3277235391345fd43e3f39fe380b1efd0e2cded02\" returns successfully" Aug 13 07:18:18.634826 kubelet[2135]: I0813 07:18:18.634685 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:18:19.088730 kubelet[2135]: E0813 07:18:19.088678 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:19.089309 kubelet[2135]: E0813 07:18:19.088844 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:19.093828 kubelet[2135]: E0813 07:18:19.091164 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:19.093828 kubelet[2135]: E0813 07:18:19.091254 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:19.093828 kubelet[2135]: E0813 07:18:19.093801 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:19.093959 kubelet[2135]: E0813 07:18:19.093935 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:20.096768 kubelet[2135]: E0813 07:18:20.096710 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:20.097410 kubelet[2135]: E0813 07:18:20.096839 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:20.097410 kubelet[2135]: E0813 07:18:20.096906 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:20.097718 kubelet[2135]: E0813 07:18:20.097677 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:21.100891 kubelet[2135]: E0813 07:18:21.100651 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:21.100891 kubelet[2135]: E0813 07:18:21.100795 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:21.101491 kubelet[2135]: E0813 07:18:21.101176 2135 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:18:21.101491 kubelet[2135]: E0813 07:18:21.101296 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:21.552744 kubelet[2135]: E0813 07:18:21.552673 2135 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:18:21.650838 kubelet[2135]: I0813 07:18:21.650761 2135 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:18:21.650838 kubelet[2135]: E0813 07:18:21.650800 2135 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:18:21.656568 kubelet[2135]: I0813 07:18:21.656545 2135 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:21.694420 kubelet[2135]: E0813 07:18:21.694289 2135 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b4268a79b76b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:18:17.046955701 +0000 UTC m=+1.238325461,LastTimestamp:2025-08-13 07:18:17.046955701 +0000 UTC m=+1.238325461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:18:21.761783 kubelet[2135]: E0813 07:18:21.761722 2135 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:21.761783 kubelet[2135]: I0813 07:18:21.761770 2135 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:21.764427 kubelet[2135]: E0813 07:18:21.764058 2135 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:21.764427 kubelet[2135]: I0813 07:18:21.764084 2135 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:21.767293 kubelet[2135]: E0813 07:18:21.767257 2135 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:22.046079 kubelet[2135]: I0813 07:18:22.046013 2135 apiserver.go:52] "Watching apiserver" Aug 13 07:18:22.051651 kubelet[2135]: I0813 07:18:22.051604 2135 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:18:23.752967 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-7.scope)... Aug 13 07:18:23.752997 systemd[1]: Reloading... Aug 13 07:18:23.843845 zram_generator::config[2470]: No configuration found. 
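
[Annotation] The rejected event's name, localhost.185b4268a79b76b5, is not random: client-go names events as <object name>.<FirstTimestamp in hex nanoseconds>, and 0x185b4268a79b76b5 decodes back to 2025-08-13 07:18:17.046955701 UTC, exactly the kubelet's "Starting" timestamp in the event dump. A quick verification of that encoding; the naming convention is client-go's, the snippet merely checks the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // FirstTimestamp from the rejected event above.
        t := time.Date(2025, 8, 13, 7, 18, 17, 46955701, time.UTC)

        // client-go derives event names as "<name>.<hex UnixNano>".
        fmt.Printf("localhost.%x\n", t.UnixNano()) // localhost.185b4268a79b76b5
    }
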
Aug 13 07:18:23.948846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:18:24.044030 systemd[1]: Reloading finished in 290 ms. Aug 13 07:18:24.088988 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:24.113318 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:18:24.113671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:24.113729 systemd[1]: kubelet.service: Consumed 1.178s CPU time, 133.5M memory peak, 0B memory swap peak. Aug 13 07:18:24.125385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:24.305522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:24.310626 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:18:24.349838 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:18:24.349838 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:18:24.349838 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:18:24.350257 kubelet[2512]: I0813 07:18:24.349873 2512 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:18:24.357262 kubelet[2512]: I0813 07:18:24.357222 2512 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 07:18:24.357262 kubelet[2512]: I0813 07:18:24.357247 2512 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:18:24.357434 kubelet[2512]: I0813 07:18:24.357416 2512 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 07:18:24.358571 kubelet[2512]: I0813 07:18:24.358550 2512 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 07:18:24.361389 kubelet[2512]: I0813 07:18:24.361361 2512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:18:24.364047 kubelet[2512]: E0813 07:18:24.364022 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:18:24.364047 kubelet[2512]: I0813 07:18:24.364044 2512 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:18:24.369395 kubelet[2512]: I0813 07:18:24.368999 2512 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:18:24.369395 kubelet[2512]: I0813 07:18:24.369211 2512 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:18:24.369524 kubelet[2512]: I0813 07:18:24.369246 2512 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:18:24.369524 kubelet[2512]: I0813 07:18:24.369512 2512 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:18:24.369524 kubelet[2512]: I0813 07:18:24.369523 2512 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 07:18:24.369689 kubelet[2512]: I0813 07:18:24.369570 2512 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:24.369779 kubelet[2512]: I0813 07:18:24.369748 2512 kubelet.go:480] "Attempting to sync node with API server" Aug 13 07:18:24.369779 kubelet[2512]: I0813 07:18:24.369767 2512 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:18:24.369872 kubelet[2512]: I0813 07:18:24.369790 2512 kubelet.go:386] "Adding apiserver pod source" Aug 13 07:18:24.369872 kubelet[2512]: I0813 07:18:24.369803 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:18:24.371018 kubelet[2512]: I0813 07:18:24.370996 2512 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:18:24.371415 kubelet[2512]: I0813 07:18:24.371385 2512 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 07:18:24.376758 kubelet[2512]: I0813 07:18:24.376104 2512 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:18:24.376758 kubelet[2512]: I0813 07:18:24.376158 2512 server.go:1289] "Started kubelet" Aug 13 07:18:24.376758 kubelet[2512]: I0813 07:18:24.376543 2512 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:18:24.377224 kubelet[2512]: I0813 
07:18:24.377172 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:18:24.377645 kubelet[2512]: I0813 07:18:24.377617 2512 server.go:317] "Adding debug handlers to kubelet server" Aug 13 07:18:24.377802 kubelet[2512]: I0813 07:18:24.377783 2512 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:18:24.380283 kubelet[2512]: I0813 07:18:24.379594 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:18:24.380283 kubelet[2512]: I0813 07:18:24.379981 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:18:24.383049 kubelet[2512]: I0813 07:18:24.383024 2512 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:18:24.383173 kubelet[2512]: I0813 07:18:24.383117 2512 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:18:24.384024 kubelet[2512]: I0813 07:18:24.383240 2512 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:18:24.385407 kubelet[2512]: I0813 07:18:24.385371 2512 factory.go:223] Registration of the systemd container factory successfully Aug 13 07:18:24.385808 kubelet[2512]: I0813 07:18:24.385695 2512 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:18:24.386746 kubelet[2512]: E0813 07:18:24.386727 2512 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:18:24.387463 kubelet[2512]: I0813 07:18:24.387429 2512 factory.go:223] Registration of the containerd container factory successfully Aug 13 07:18:24.396538 kubelet[2512]: I0813 07:18:24.396491 2512 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 07:18:24.398486 kubelet[2512]: I0813 07:18:24.398458 2512 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 07:18:24.398759 kubelet[2512]: I0813 07:18:24.398738 2512 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 07:18:24.398806 kubelet[2512]: I0813 07:18:24.398767 2512 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
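
[Annotation] Both kubelet starts report that the systemd watchdog is not enabled: the unit sets no WatchdogSec=, so systemd exports no WATCHDOG_USEC to the service and the kubelet skips its keep-alive loop. Under the sd_notify protocol, a watchdog-enabled service reads WATCHDOG_USEC and sends "WATCHDOG=1" datagrams to $NOTIFY_SOCKET, conventionally at half the timeout. A minimal sketch of that check, stdlib only; the half-interval choice is the usual recommendation, and abstract-socket handling is omitted:

    package main

    import (
        "fmt"
        "net"
        "os"
        "strconv"
        "time"
    )

    func main() {
        usec := os.Getenv("WATCHDOG_USEC")
        sock := os.Getenv("NOTIFY_SOCKET") // "@"-prefixed abstract sockets not handled here
        if usec == "" || sock == "" {
            fmt.Println("systemd watchdog is not enabled") // matches the kubelet's log line
            return
        }
        n, err := strconv.ParseInt(usec, 10, 64)
        if err != nil || n <= 0 {
            fmt.Println("invalid watchdog interval, health checking will not be started")
            return
        }
        interval := time.Duration(n) * time.Microsecond / 2 // ping at half the timeout

        conn, err := net.DialUnix("unixgram", nil, &net.UnixAddr{Name: sock, Net: "unixgram"})
        if err != nil {
            return
        }
        defer conn.Close()
        for range time.Tick(interval) {
            conn.Write([]byte("WATCHDOG=1")) // sd_notify keep-alive
        }
    }
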
Aug 13 07:18:24.398806 kubelet[2512]: I0813 07:18:24.398776 2512 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 07:18:24.398868 kubelet[2512]: E0813 07:18:24.398837 2512 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:18:24.426620 kubelet[2512]: I0813 07:18:24.426589 2512 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:18:24.426620 kubelet[2512]: I0813 07:18:24.426612 2512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:18:24.426620 kubelet[2512]: I0813 07:18:24.426630 2512 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:18:24.426807 kubelet[2512]: I0813 07:18:24.426743 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:18:24.426807 kubelet[2512]: I0813 07:18:24.426759 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:18:24.426807 kubelet[2512]: I0813 07:18:24.426774 2512 policy_none.go:49] "None policy: Start" Aug 13 07:18:24.426807 kubelet[2512]: I0813 07:18:24.426783 2512 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:18:24.426807 kubelet[2512]: I0813 07:18:24.426794 2512 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:18:24.426932 kubelet[2512]: I0813 07:18:24.426893 2512 state_mem.go:75] "Updated machine memory state" Aug 13 07:18:24.430626 kubelet[2512]: E0813 07:18:24.430605 2512 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 07:18:24.430889 kubelet[2512]: I0813 07:18:24.430775 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:18:24.430889 kubelet[2512]: I0813 07:18:24.430790 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:18:24.431270 kubelet[2512]: I0813 07:18:24.431015 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:18:24.431993 kubelet[2512]: E0813 07:18:24.431969 2512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:18:24.500312 kubelet[2512]: I0813 07:18:24.500281 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.500467 kubelet[2512]: I0813 07:18:24.500355 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:24.500573 kubelet[2512]: I0813 07:18:24.500507 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:24.537440 kubelet[2512]: I0813 07:18:24.537414 2512 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:18:24.544775 kubelet[2512]: I0813 07:18:24.544716 2512 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 07:18:24.544775 kubelet[2512]: I0813 07:18:24.544788 2512 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:18:24.585209 kubelet[2512]: I0813 07:18:24.585060 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:24.585209 kubelet[2512]: I0813 07:18:24.585094 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:24.585209 kubelet[2512]: I0813 07:18:24.585115 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a01567077f282b41594e5eb67c8159f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a01567077f282b41594e5eb67c8159f\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:24.585209 kubelet[2512]: I0813 07:18:24.585133 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.585209 kubelet[2512]: I0813 07:18:24.585148 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.585456 kubelet[2512]: I0813 07:18:24.585163 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.585456 kubelet[2512]: I0813 07:18:24.585205 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.585456 kubelet[2512]: I0813 07:18:24.585242 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:24.585456 kubelet[2512]: I0813 07:18:24.585261 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:24.805459 kubelet[2512]: E0813 07:18:24.805427 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:24.807376 kubelet[2512]: E0813 07:18:24.807323 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:24.807376 kubelet[2512]: E0813 07:18:24.807341 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:25.370231 kubelet[2512]: I0813 07:18:25.370166 2512 apiserver.go:52] "Watching apiserver" Aug 13 07:18:25.383599 kubelet[2512]: I0813 07:18:25.383562 2512 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:18:25.408976 kubelet[2512]: I0813 07:18:25.408943 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:25.409121 kubelet[2512]: I0813 07:18:25.409067 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:25.409121 kubelet[2512]: I0813 07:18:25.409094 2512 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:25.415833 kubelet[2512]: E0813 07:18:25.415763 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:18:25.415968 kubelet[2512]: E0813 07:18:25.415929 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:25.416259 kubelet[2512]: E0813 07:18:25.416226 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:18:25.416628 kubelet[2512]: E0813 07:18:25.416356 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:25.416628 kubelet[2512]: E0813 07:18:25.416225 2512 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:18:25.416628 kubelet[2512]: E0813 07:18:25.416558 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:25.429334 kubelet[2512]: I0813 07:18:25.429266 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.429246106 podStartE2EDuration="1.429246106s" podCreationTimestamp="2025-08-13 07:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:25.429155566 +0000 UTC m=+1.113866045" watchObservedRunningTime="2025-08-13 07:18:25.429246106 +0000 UTC m=+1.113956586" Aug 13 07:18:25.443218 kubelet[2512]: I0813 07:18:25.443154 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.44313694 podStartE2EDuration="1.44313694s" podCreationTimestamp="2025-08-13 07:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:25.436121679 +0000 UTC m=+1.120832148" watchObservedRunningTime="2025-08-13 07:18:25.44313694 +0000 UTC m=+1.127847419" Aug 13 07:18:25.449134 kubelet[2512]: I0813 07:18:25.449090 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.449078831 podStartE2EDuration="1.449078831s" podCreationTimestamp="2025-08-13 07:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:25.443199414 +0000 UTC m=+1.127909903" watchObservedRunningTime="2025-08-13 07:18:25.449078831 +0000 UTC m=+1.133789310" Aug 13 07:18:26.410025 kubelet[2512]: E0813 07:18:26.409956 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:26.410025 kubelet[2512]: E0813 07:18:26.409993 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:26.410735 kubelet[2512]: E0813 07:18:26.410194 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:27.411304 kubelet[2512]: E0813 07:18:27.411266 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:27.411304 kubelet[2512]: E0813 07:18:27.411312 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:29.789199 kubelet[2512]: E0813 07:18:29.789129 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:30.396552 kubelet[2512]: I0813 07:18:30.396502 2512 kuberuntime_manager.go:1746] "Updating runtime config 
through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:18:30.399196 containerd[1456]: time="2025-08-13T07:18:30.397185984Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:18:30.399716 kubelet[2512]: I0813 07:18:30.397922 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:18:30.418603 kubelet[2512]: E0813 07:18:30.417936 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:31.313904 systemd[1]: Created slice kubepods-besteffort-pod00e55a6c_d586_4e07_9932_6b258c727342.slice - libcontainer container kubepods-besteffort-pod00e55a6c_d586_4e07_9932_6b258c727342.slice. Aug 13 07:18:31.331406 kubelet[2512]: I0813 07:18:31.331327 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00e55a6c-d586-4e07-9932-6b258c727342-var-lib-calico\") pod \"tigera-operator-747864d56d-282s5\" (UID: \"00e55a6c-d586-4e07-9932-6b258c727342\") " pod="tigera-operator/tigera-operator-747864d56d-282s5" Aug 13 07:18:31.331406 kubelet[2512]: I0813 07:18:31.331406 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dhx\" (UniqueName: \"kubernetes.io/projected/00e55a6c-d586-4e07-9932-6b258c727342-kube-api-access-p7dhx\") pod \"tigera-operator-747864d56d-282s5\" (UID: \"00e55a6c-d586-4e07-9932-6b258c727342\") " pod="tigera-operator/tigera-operator-747864d56d-282s5" Aug 13 07:18:31.392003 systemd[1]: Created slice kubepods-besteffort-podbfdc9cc0_76d0_4462_a33b_9b2616ef826e.slice - libcontainer container kubepods-besteffort-podbfdc9cc0_76d0_4462_a33b_9b2616ef826e.slice. 
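Note on the recurring dns.go:153 errors above and below: glibc's resolver honors at most three nameserver entries in /etc/resolv.conf (MAXNS=3), so when the node's resolv.conf lists more, the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns about the omitted rest. A minimal sketch of that trimming, assuming the conventional three-nameserver limit; illustration only, not the kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; resolvers beyond this are ignored

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // mirrors the "some nameservers have been omitted" warning seen in the log
            fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }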
Aug 13 07:18:31.432215 kubelet[2512]: I0813 07:18:31.432143 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfdc9cc0-76d0-4462-a33b-9b2616ef826e-lib-modules\") pod \"kube-proxy-4b96r\" (UID: \"bfdc9cc0-76d0-4462-a33b-9b2616ef826e\") " pod="kube-system/kube-proxy-4b96r" Aug 13 07:18:31.432215 kubelet[2512]: I0813 07:18:31.432209 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfdc9cc0-76d0-4462-a33b-9b2616ef826e-xtables-lock\") pod \"kube-proxy-4b96r\" (UID: \"bfdc9cc0-76d0-4462-a33b-9b2616ef826e\") " pod="kube-system/kube-proxy-4b96r" Aug 13 07:18:31.432491 kubelet[2512]: I0813 07:18:31.432279 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfdc9cc0-76d0-4462-a33b-9b2616ef826e-kube-proxy\") pod \"kube-proxy-4b96r\" (UID: \"bfdc9cc0-76d0-4462-a33b-9b2616ef826e\") " pod="kube-system/kube-proxy-4b96r" Aug 13 07:18:31.432491 kubelet[2512]: I0813 07:18:31.432311 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvvz\" (UniqueName: \"kubernetes.io/projected/bfdc9cc0-76d0-4462-a33b-9b2616ef826e-kube-api-access-bdvvz\") pod \"kube-proxy-4b96r\" (UID: \"bfdc9cc0-76d0-4462-a33b-9b2616ef826e\") " pod="kube-system/kube-proxy-4b96r" Aug 13 07:18:31.436965 kubelet[2512]: E0813 07:18:31.436917 2512 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:18:31.436965 kubelet[2512]: E0813 07:18:31.436952 2512 projected.go:194] Error preparing data for projected volume kube-api-access-p7dhx for pod tigera-operator/tigera-operator-747864d56d-282s5: configmap "kube-root-ca.crt" not found Aug 13 07:18:31.437110 kubelet[2512]: E0813 07:18:31.437050 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00e55a6c-d586-4e07-9932-6b258c727342-kube-api-access-p7dhx podName:00e55a6c-d586-4e07-9932-6b258c727342 nodeName:}" failed. No retries permitted until 2025-08-13 07:18:31.937020849 +0000 UTC m=+7.621731328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p7dhx" (UniqueName: "kubernetes.io/projected/00e55a6c-d586-4e07-9932-6b258c727342-kube-api-access-p7dhx") pod "tigera-operator-747864d56d-282s5" (UID: "00e55a6c-d586-4e07-9932-6b258c727342") : configmap "kube-root-ca.crt" not found Aug 13 07:18:31.697620 kubelet[2512]: E0813 07:18:31.697559 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:31.698356 containerd[1456]: time="2025-08-13T07:18:31.698295893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4b96r,Uid:bfdc9cc0-76d0-4462-a33b-9b2616ef826e,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:31.727329 containerd[1456]: time="2025-08-13T07:18:31.727042032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:31.727329 containerd[1456]: time="2025-08-13T07:18:31.727125857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:31.727329 containerd[1456]: time="2025-08-13T07:18:31.727142589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:31.727329 containerd[1456]: time="2025-08-13T07:18:31.727310689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:31.758110 systemd[1]: Started cri-containerd-16cfc8afe8d9627fcd10facf530e269ef0fad2cdf683e0a2843cdc487ebc517a.scope - libcontainer container 16cfc8afe8d9627fcd10facf530e269ef0fad2cdf683e0a2843cdc487ebc517a. Aug 13 07:18:31.784668 containerd[1456]: time="2025-08-13T07:18:31.784620128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4b96r,Uid:bfdc9cc0-76d0-4462-a33b-9b2616ef826e,Namespace:kube-system,Attempt:0,} returns sandbox id \"16cfc8afe8d9627fcd10facf530e269ef0fad2cdf683e0a2843cdc487ebc517a\"" Aug 13 07:18:31.785611 kubelet[2512]: E0813 07:18:31.785584 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:31.793108 containerd[1456]: time="2025-08-13T07:18:31.793055793Z" level=info msg="CreateContainer within sandbox \"16cfc8afe8d9627fcd10facf530e269ef0fad2cdf683e0a2843cdc487ebc517a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:18:31.812118 containerd[1456]: time="2025-08-13T07:18:31.812067751Z" level=info msg="CreateContainer within sandbox \"16cfc8afe8d9627fcd10facf530e269ef0fad2cdf683e0a2843cdc487ebc517a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"022e10576b7c2025935fea6d6ebf88ba4b5e1269389d9f9997c09d3c5cc4f2b9\"" Aug 13 07:18:31.812855 containerd[1456]: time="2025-08-13T07:18:31.812650492Z" level=info msg="StartContainer for \"022e10576b7c2025935fea6d6ebf88ba4b5e1269389d9f9997c09d3c5cc4f2b9\"" Aug 13 07:18:31.847089 systemd[1]: Started cri-containerd-022e10576b7c2025935fea6d6ebf88ba4b5e1269389d9f9997c09d3c5cc4f2b9.scope - libcontainer container 022e10576b7c2025935fea6d6ebf88ba4b5e1269389d9f9997c09d3c5cc4f2b9. Aug 13 07:18:31.878900 containerd[1456]: time="2025-08-13T07:18:31.878157480Z" level=info msg="StartContainer for \"022e10576b7c2025935fea6d6ebf88ba4b5e1269389d9f9997c09d3c5cc4f2b9\" returns successfully" Aug 13 07:18:32.224075 containerd[1456]: time="2025-08-13T07:18:32.224025710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-282s5,Uid:00e55a6c-d586-4e07-9932-6b258c727342,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:18:32.248010 containerd[1456]: time="2025-08-13T07:18:32.247864706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:32.248010 containerd[1456]: time="2025-08-13T07:18:32.247937499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:32.248010 containerd[1456]: time="2025-08-13T07:18:32.247952389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:32.249180 containerd[1456]: time="2025-08-13T07:18:32.249063572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:32.268281 systemd[1]: Started cri-containerd-b43ff13a5470c3d29edbb3813187046b94ea69817bd940afbd5bd9a6914c2ebf.scope - libcontainer container b43ff13a5470c3d29edbb3813187046b94ea69817bd940afbd5bd9a6914c2ebf. Aug 13 07:18:32.304115 containerd[1456]: time="2025-08-13T07:18:32.304059879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-282s5,Uid:00e55a6c-d586-4e07-9932-6b258c727342,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b43ff13a5470c3d29edbb3813187046b94ea69817bd940afbd5bd9a6914c2ebf\"" Aug 13 07:18:32.306395 containerd[1456]: time="2025-08-13T07:18:32.306219292Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:18:32.423934 kubelet[2512]: E0813 07:18:32.423898 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:32.432094 kubelet[2512]: I0813 07:18:32.432004 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4b96r" podStartSLOduration=1.4319376529999999 podStartE2EDuration="1.431937653s" podCreationTimestamp="2025-08-13 07:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:32.431805655 +0000 UTC m=+8.116516134" watchObservedRunningTime="2025-08-13 07:18:32.431937653 +0000 UTC m=+8.116648132" Aug 13 07:18:33.990284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437215669.mount: Deactivated successfully. Aug 13 07:18:34.551170 update_engine[1438]: I20250813 07:18:34.551069 1438 update_attempter.cc:509] Updating boot flags... 
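The pod_startup_latency_tracker record above for kube-proxy-4b96r is plain timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling sit at the zero time (no image pull observed), the SLO duration equals the E2E duration. A worked check against the logged values (illustration only, not the kubelet's code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // timestamps copied from the log record above
        created, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:18:31Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:18:32.431937653Z")
        fmt.Println("podStartE2EDuration =", running.Sub(created)) // 1.431937653s, matching the log
    }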
Aug 13 07:18:34.582418 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2829) Aug 13 07:18:34.681923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2829) Aug 13 07:18:34.704847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2829) Aug 13 07:18:35.060428 containerd[1456]: time="2025-08-13T07:18:35.060349339Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:35.061168 containerd[1456]: time="2025-08-13T07:18:35.061100881Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:18:35.062267 containerd[1456]: time="2025-08-13T07:18:35.062218894Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:35.064475 containerd[1456]: time="2025-08-13T07:18:35.064428631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:35.065035 containerd[1456]: time="2025-08-13T07:18:35.064996856Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.758745742s" Aug 13 07:18:35.065035 containerd[1456]: time="2025-08-13T07:18:35.065030362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:18:35.069943 containerd[1456]: time="2025-08-13T07:18:35.069862848Z" level=info msg="CreateContainer within sandbox \"b43ff13a5470c3d29edbb3813187046b94ea69817bd940afbd5bd9a6914c2ebf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:18:35.083410 containerd[1456]: time="2025-08-13T07:18:35.083345034Z" level=info msg="CreateContainer within sandbox \"b43ff13a5470c3d29edbb3813187046b94ea69817bd940afbd5bd9a6914c2ebf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a6521f1706a628909144fee4c9c4af693c1ec5c672c11fc70477a3e06af7985\"" Aug 13 07:18:35.084043 containerd[1456]: time="2025-08-13T07:18:35.083992182Z" level=info msg="StartContainer for \"2a6521f1706a628909144fee4c9c4af693c1ec5c672c11fc70477a3e06af7985\"" Aug 13 07:18:35.122073 systemd[1]: Started cri-containerd-2a6521f1706a628909144fee4c9c4af693c1ec5c672c11fc70477a3e06af7985.scope - libcontainer container 2a6521f1706a628909144fee4c9c4af693c1ec5c672c11fc70477a3e06af7985. 
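From the pull record above, an effective throughput can be estimated from the two logged figures (bytes read=25056543, completed in 2.758745742s). Back-of-the-envelope arithmetic only; containerd does not log this rate itself:

    package main

    import "fmt"

    func main() {
        const bytesRead = 25056543  // "active requests=0, bytes read=25056543"
        const seconds = 2.758745742 // "... in 2.758745742s"
        fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ≈ 8.7 MiB/s
    }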
Aug 13 07:18:35.151805 containerd[1456]: time="2025-08-13T07:18:35.151722323Z" level=info msg="StartContainer for \"2a6521f1706a628909144fee4c9c4af693c1ec5c672c11fc70477a3e06af7985\" returns successfully" Aug 13 07:18:35.260784 kubelet[2512]: E0813 07:18:35.260720 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:35.430120 kubelet[2512]: E0813 07:18:35.430071 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:36.968314 kubelet[2512]: E0813 07:18:36.968240 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:36.997930 kubelet[2512]: I0813 07:18:36.997803 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-282s5" podStartSLOduration=3.23746702 podStartE2EDuration="5.997779967s" podCreationTimestamp="2025-08-13 07:18:31 +0000 UTC" firstStartedPulling="2025-08-13 07:18:32.305548791 +0000 UTC m=+7.990259270" lastFinishedPulling="2025-08-13 07:18:35.065861738 +0000 UTC m=+10.750572217" observedRunningTime="2025-08-13 07:18:35.438456935 +0000 UTC m=+11.123167434" watchObservedRunningTime="2025-08-13 07:18:36.997779967 +0000 UTC m=+12.682490447" Aug 13 07:18:37.442046 kubelet[2512]: E0813 07:18:37.441987 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:40.622025 sudo[1635]: pam_unix(sudo:session): session closed for user root Aug 13 07:18:40.628782 sshd[1632]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:40.632479 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:18:40.633919 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:37840.service: Deactivated successfully. Aug 13 07:18:40.637497 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:18:40.637783 systemd[1]: session-7.scope: Consumed 6.174s CPU time, 161.6M memory peak, 0B memory swap peak. Aug 13 07:18:40.640141 systemd-logind[1436]: Removed session 7. Aug 13 07:18:43.168527 systemd[1]: Created slice kubepods-besteffort-pode5c36183_7da0_47a3_ab4c_2bb37905e6b1.slice - libcontainer container kubepods-besteffort-pode5c36183_7da0_47a3_ab4c_2bb37905e6b1.slice. 
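The long runs of driver-call.go/plugins.go errors that follow share one root cause: the kubelet's FlexVolume probe tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is absent, the call therefore yields empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input". A minimal reproduction of that failure chain, with a hypothetical driverStatus stand-in for the real reply type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // hypothetical stand-in for the FlexVolume driver reply; not the kubelet's type
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        // driver binary missing -> exec fails and out is empty
        out, err := exec.Command(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init").CombinedOutput()
        if err != nil {
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        // empty output then fails JSON decoding, producing the logged message
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("failed to unmarshal output:", err) // unexpected end of JSON input
        }
    }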
Aug 13 07:18:43.211195 kubelet[2512]: I0813 07:18:43.211056 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5c36183-7da0-47a3-ab4c-2bb37905e6b1-tigera-ca-bundle\") pod \"calico-typha-6569f49d87-pwffd\" (UID: \"e5c36183-7da0-47a3-ab4c-2bb37905e6b1\") " pod="calico-system/calico-typha-6569f49d87-pwffd" Aug 13 07:18:43.212113 kubelet[2512]: I0813 07:18:43.211561 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6c7\" (UniqueName: \"kubernetes.io/projected/e5c36183-7da0-47a3-ab4c-2bb37905e6b1-kube-api-access-9b6c7\") pod \"calico-typha-6569f49d87-pwffd\" (UID: \"e5c36183-7da0-47a3-ab4c-2bb37905e6b1\") " pod="calico-system/calico-typha-6569f49d87-pwffd" Aug 13 07:18:43.212585 kubelet[2512]: I0813 07:18:43.212533 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5c36183-7da0-47a3-ab4c-2bb37905e6b1-typha-certs\") pod \"calico-typha-6569f49d87-pwffd\" (UID: \"e5c36183-7da0-47a3-ab4c-2bb37905e6b1\") " pod="calico-system/calico-typha-6569f49d87-pwffd" Aug 13 07:18:43.246694 systemd[1]: Created slice kubepods-besteffort-pod68ca8a7c_584e_4434_b8f2_3454f3ab773b.slice - libcontainer container kubepods-besteffort-pod68ca8a7c_584e_4434_b8f2_3454f3ab773b.slice. Aug 13 07:18:43.314144 kubelet[2512]: I0813 07:18:43.313270 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68ca8a7c-584e-4434-b8f2-3454f3ab773b-tigera-ca-bundle\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314144 kubelet[2512]: I0813 07:18:43.313331 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-cni-net-dir\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314144 kubelet[2512]: I0813 07:18:43.313354 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/68ca8a7c-584e-4434-b8f2-3454f3ab773b-node-certs\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314144 kubelet[2512]: I0813 07:18:43.313384 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-flexvol-driver-host\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314144 kubelet[2512]: I0813 07:18:43.313418 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-var-run-calico\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314629 kubelet[2512]: I0813 07:18:43.313436 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-cni-log-dir\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314629 kubelet[2512]: I0813 07:18:43.313452 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-cni-bin-dir\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314629 kubelet[2512]: I0813 07:18:43.313467 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb98w\" (UniqueName: \"kubernetes.io/projected/68ca8a7c-584e-4434-b8f2-3454f3ab773b-kube-api-access-cb98w\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314629 kubelet[2512]: I0813 07:18:43.313490 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-policysync\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314629 kubelet[2512]: I0813 07:18:43.313506 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-var-lib-calico\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314750 kubelet[2512]: I0813 07:18:43.313519 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-xtables-lock\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.314750 kubelet[2512]: I0813 07:18:43.313533 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68ca8a7c-584e-4434-b8f2-3454f3ab773b-lib-modules\") pod \"calico-node-2hwbc\" (UID: \"68ca8a7c-584e-4434-b8f2-3454f3ab773b\") " pod="calico-system/calico-node-2hwbc" Aug 13 07:18:43.396221 kubelet[2512]: E0813 07:18:43.396153 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:43.416230 kubelet[2512]: I0813 07:18:43.414999 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ec0a1a1-c8b0-4122-ab58-78229dc90d73-kubelet-dir\") pod \"csi-node-driver-fcjzr\" (UID: \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\") " pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:43.416230 kubelet[2512]: I0813 07:18:43.415095 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/0ec0a1a1-c8b0-4122-ab58-78229dc90d73-registration-dir\") pod \"csi-node-driver-fcjzr\" (UID: \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\") " pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:43.416230 kubelet[2512]: I0813 07:18:43.415115 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4gpq\" (UniqueName: \"kubernetes.io/projected/0ec0a1a1-c8b0-4122-ab58-78229dc90d73-kube-api-access-p4gpq\") pod \"csi-node-driver-fcjzr\" (UID: \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\") " pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:43.416230 kubelet[2512]: I0813 07:18:43.415168 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ec0a1a1-c8b0-4122-ab58-78229dc90d73-socket-dir\") pod \"csi-node-driver-fcjzr\" (UID: \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\") " pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:43.416230 kubelet[2512]: I0813 07:18:43.415238 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ec0a1a1-c8b0-4122-ab58-78229dc90d73-varrun\") pod \"csi-node-driver-fcjzr\" (UID: \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\") " pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:43.417795 kubelet[2512]: E0813 07:18:43.417776 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.417962 kubelet[2512]: W0813 07:18:43.417944 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.418081 kubelet[2512]: E0813 07:18:43.418032 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.425840 kubelet[2512]: E0813 07:18:43.424243 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.425840 kubelet[2512]: W0813 07:18:43.424272 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.425840 kubelet[2512]: E0813 07:18:43.424298 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.429279 kubelet[2512]: E0813 07:18:43.429221 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.429279 kubelet[2512]: W0813 07:18:43.429252 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.429279 kubelet[2512]: E0813 07:18:43.429277 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.474368 kubelet[2512]: E0813 07:18:43.474313 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:43.475071 containerd[1456]: time="2025-08-13T07:18:43.475027135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6569f49d87-pwffd,Uid:e5c36183-7da0-47a3-ab4c-2bb37905e6b1,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:43.509351 containerd[1456]: time="2025-08-13T07:18:43.508764831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:43.509351 containerd[1456]: time="2025-08-13T07:18:43.508950167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:43.509351 containerd[1456]: time="2025-08-13T07:18:43.508980626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:43.509351 containerd[1456]: time="2025-08-13T07:18:43.509115044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:43.516531 kubelet[2512]: E0813 07:18:43.516501 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.516531 kubelet[2512]: W0813 07:18:43.516526 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.516656 kubelet[2512]: E0813 07:18:43.516550 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.518066 kubelet[2512]: E0813 07:18:43.518032 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.518117 kubelet[2512]: W0813 07:18:43.518065 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.518117 kubelet[2512]: E0813 07:18:43.518097 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.519112 kubelet[2512]: E0813 07:18:43.519084 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.519112 kubelet[2512]: W0813 07:18:43.519099 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.519187 kubelet[2512]: E0813 07:18:43.519116 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.519926 kubelet[2512]: E0813 07:18:43.519903 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.519926 kubelet[2512]: W0813 07:18:43.519921 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.519993 kubelet[2512]: E0813 07:18:43.519933 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.520189 kubelet[2512]: E0813 07:18:43.520164 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.520189 kubelet[2512]: W0813 07:18:43.520177 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.520189 kubelet[2512]: E0813 07:18:43.520188 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.520508 kubelet[2512]: E0813 07:18:43.520481 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.520508 kubelet[2512]: W0813 07:18:43.520496 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.520508 kubelet[2512]: E0813 07:18:43.520509 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.520858 kubelet[2512]: E0813 07:18:43.520837 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.520858 kubelet[2512]: W0813 07:18:43.520854 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.520938 kubelet[2512]: E0813 07:18:43.520868 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.521141 kubelet[2512]: E0813 07:18:43.521116 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.521141 kubelet[2512]: W0813 07:18:43.521127 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.521141 kubelet[2512]: E0813 07:18:43.521136 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.521368 kubelet[2512]: E0813 07:18:43.521347 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.521368 kubelet[2512]: W0813 07:18:43.521358 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.521368 kubelet[2512]: E0813 07:18:43.521366 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.521626 kubelet[2512]: E0813 07:18:43.521611 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.521626 kubelet[2512]: W0813 07:18:43.521625 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.521626 kubelet[2512]: E0813 07:18:43.521634 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.521884 kubelet[2512]: E0813 07:18:43.521869 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.521884 kubelet[2512]: W0813 07:18:43.521881 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.522002 kubelet[2512]: E0813 07:18:43.521890 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.522144 kubelet[2512]: E0813 07:18:43.522125 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.522144 kubelet[2512]: W0813 07:18:43.522140 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.522192 kubelet[2512]: E0813 07:18:43.522150 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.522429 kubelet[2512]: E0813 07:18:43.522414 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.522429 kubelet[2512]: W0813 07:18:43.522425 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.522429 kubelet[2512]: E0813 07:18:43.522433 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.522654 kubelet[2512]: E0813 07:18:43.522639 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.522654 kubelet[2512]: W0813 07:18:43.522650 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.522717 kubelet[2512]: E0813 07:18:43.522660 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.523050 kubelet[2512]: E0813 07:18:43.523025 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.523050 kubelet[2512]: W0813 07:18:43.523045 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.523189 kubelet[2512]: E0813 07:18:43.523059 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.523355 kubelet[2512]: E0813 07:18:43.523337 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.523355 kubelet[2512]: W0813 07:18:43.523351 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.523425 kubelet[2512]: E0813 07:18:43.523363 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.523680 kubelet[2512]: E0813 07:18:43.523661 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.523680 kubelet[2512]: W0813 07:18:43.523675 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.523761 kubelet[2512]: E0813 07:18:43.523686 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.524206 kubelet[2512]: E0813 07:18:43.524169 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.524206 kubelet[2512]: W0813 07:18:43.524187 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.524206 kubelet[2512]: E0813 07:18:43.524198 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.524470 kubelet[2512]: E0813 07:18:43.524443 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.524470 kubelet[2512]: W0813 07:18:43.524460 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.524470 kubelet[2512]: E0813 07:18:43.524469 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.525986 kubelet[2512]: E0813 07:18:43.525966 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.525986 kubelet[2512]: W0813 07:18:43.525981 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.526054 kubelet[2512]: E0813 07:18:43.525996 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.526317 kubelet[2512]: E0813 07:18:43.526298 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.526317 kubelet[2512]: W0813 07:18:43.526312 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.526386 kubelet[2512]: E0813 07:18:43.526322 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.526579 kubelet[2512]: E0813 07:18:43.526561 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.526579 kubelet[2512]: W0813 07:18:43.526574 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.526635 kubelet[2512]: E0813 07:18:43.526584 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.526926 kubelet[2512]: E0813 07:18:43.526905 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.526926 kubelet[2512]: W0813 07:18:43.526919 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.527007 kubelet[2512]: E0813 07:18:43.526930 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:43.527218 kubelet[2512]: E0813 07:18:43.527199 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.527218 kubelet[2512]: W0813 07:18:43.527213 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.527291 kubelet[2512]: E0813 07:18:43.527223 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.527509 kubelet[2512]: E0813 07:18:43.527490 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.527509 kubelet[2512]: W0813 07:18:43.527504 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.527573 kubelet[2512]: E0813 07:18:43.527515 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.534429 kubelet[2512]: E0813 07:18:43.534307 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:43.534429 kubelet[2512]: W0813 07:18:43.534329 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:43.534429 kubelet[2512]: E0813 07:18:43.534347 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:43.549994 systemd[1]: Started cri-containerd-0fb28a6dadea2d60efbc09374a80fa1b7b40c23edeed22b63ac4bf2fcd5fe5be.scope - libcontainer container 0fb28a6dadea2d60efbc09374a80fa1b7b40c23edeed22b63ac4bf2fcd5fe5be. Aug 13 07:18:43.552064 containerd[1456]: time="2025-08-13T07:18:43.551758305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2hwbc,Uid:68ca8a7c-584e-4434-b8f2-3454f3ab773b,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:43.577358 containerd[1456]: time="2025-08-13T07:18:43.577210159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:43.577659 containerd[1456]: time="2025-08-13T07:18:43.577498905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:43.577659 containerd[1456]: time="2025-08-13T07:18:43.577540695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:43.578516 containerd[1456]: time="2025-08-13T07:18:43.578453239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:43.596833 containerd[1456]: time="2025-08-13T07:18:43.596762177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6569f49d87-pwffd,Uid:e5c36183-7da0-47a3-ab4c-2bb37905e6b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fb28a6dadea2d60efbc09374a80fa1b7b40c23edeed22b63ac4bf2fcd5fe5be\"" Aug 13 07:18:43.602214 systemd[1]: Started cri-containerd-789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6.scope - libcontainer container 789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6. Aug 13 07:18:43.606983 kubelet[2512]: E0813 07:18:43.606944 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:43.613234 containerd[1456]: time="2025-08-13T07:18:43.613185578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:18:43.638892 containerd[1456]: time="2025-08-13T07:18:43.638662330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2hwbc,Uid:68ca8a7c-584e-4434-b8f2-3454f3ab773b,Namespace:calico-system,Attempt:0,} returns sandbox id \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\"" Aug 13 07:18:45.000659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501930474.mount: Deactivated successfully. Aug 13 07:18:45.380515 containerd[1456]: time="2025-08-13T07:18:45.380353425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.381323 containerd[1456]: time="2025-08-13T07:18:45.381235437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:18:45.382533 containerd[1456]: time="2025-08-13T07:18:45.382473142Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.385334 containerd[1456]: time="2025-08-13T07:18:45.385286701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:45.385911 containerd[1456]: time="2025-08-13T07:18:45.385858910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.772629899s" Aug 13 07:18:45.385911 containerd[1456]: time="2025-08-13T07:18:45.385889879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:18:45.386966 containerd[1456]: time="2025-08-13T07:18:45.386916970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:18:45.400920 kubelet[2512]: E0813 07:18:45.400729 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:45.403148 containerd[1456]: time="2025-08-13T07:18:45.403110799Z" level=info msg="CreateContainer within sandbox \"0fb28a6dadea2d60efbc09374a80fa1b7b40c23edeed22b63ac4bf2fcd5fe5be\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:18:45.416102 containerd[1456]: time="2025-08-13T07:18:45.416043257Z" level=info msg="CreateContainer within sandbox \"0fb28a6dadea2d60efbc09374a80fa1b7b40c23edeed22b63ac4bf2fcd5fe5be\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fcf2ef22a87d490af93828ad2cae79e446810d3303b7ec014010daf95661a6f8\"" Aug 13 07:18:45.416667 containerd[1456]: time="2025-08-13T07:18:45.416637487Z" level=info msg="StartContainer for \"fcf2ef22a87d490af93828ad2cae79e446810d3303b7ec014010daf95661a6f8\"" Aug 13 07:18:45.460132 systemd[1]: Started cri-containerd-fcf2ef22a87d490af93828ad2cae79e446810d3303b7ec014010daf95661a6f8.scope - libcontainer container fcf2ef22a87d490af93828ad2cae79e446810d3303b7ec014010daf95661a6f8. Aug 13 07:18:45.504392 containerd[1456]: time="2025-08-13T07:18:45.504343194Z" level=info msg="StartContainer for \"fcf2ef22a87d490af93828ad2cae79e446810d3303b7ec014010daf95661a6f8\" returns successfully" Aug 13 07:18:46.463137 kubelet[2512]: E0813 07:18:46.463081 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:46.483277 kubelet[2512]: I0813 07:18:46.483082 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6569f49d87-pwffd" podStartSLOduration=2.706924618 podStartE2EDuration="4.483051648s" podCreationTimestamp="2025-08-13 07:18:42 +0000 UTC" firstStartedPulling="2025-08-13 07:18:43.610601609 +0000 UTC m=+19.295312088" lastFinishedPulling="2025-08-13 07:18:45.386728639 +0000 UTC m=+21.071439118" observedRunningTime="2025-08-13 07:18:46.482582558 +0000 UTC m=+22.167293037" watchObservedRunningTime="2025-08-13 07:18:46.483051648 +0000 UTC m=+22.167762127" Aug 13 07:18:46.521155 kubelet[2512]: E0813 07:18:46.521114 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.521155 kubelet[2512]: W0813 07:18:46.521139 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.521155 kubelet[2512]: E0813 07:18:46.521163 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.521462 kubelet[2512]: E0813 07:18:46.521434 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.521462 kubelet[2512]: W0813 07:18:46.521457 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.521521 kubelet[2512]: E0813 07:18:46.521467 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.521678 kubelet[2512]: E0813 07:18:46.521654 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.521678 kubelet[2512]: W0813 07:18:46.521666 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.521678 kubelet[2512]: E0813 07:18:46.521674 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.521956 kubelet[2512]: E0813 07:18:46.521939 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.521956 kubelet[2512]: W0813 07:18:46.521951 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.522028 kubelet[2512]: E0813 07:18:46.521960 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.522185 kubelet[2512]: E0813 07:18:46.522169 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.522185 kubelet[2512]: W0813 07:18:46.522180 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.522231 kubelet[2512]: E0813 07:18:46.522188 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.522372 kubelet[2512]: E0813 07:18:46.522358 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.522372 kubelet[2512]: W0813 07:18:46.522368 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.522427 kubelet[2512]: E0813 07:18:46.522376 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.522623 kubelet[2512]: E0813 07:18:46.522608 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.522623 kubelet[2512]: W0813 07:18:46.522619 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.522679 kubelet[2512]: E0813 07:18:46.522629 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.522866 kubelet[2512]: E0813 07:18:46.522851 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.522866 kubelet[2512]: W0813 07:18:46.522862 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.522931 kubelet[2512]: E0813 07:18:46.522871 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.523069 kubelet[2512]: E0813 07:18:46.523054 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.523069 kubelet[2512]: W0813 07:18:46.523064 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.523116 kubelet[2512]: E0813 07:18:46.523073 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.523259 kubelet[2512]: E0813 07:18:46.523245 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.523259 kubelet[2512]: W0813 07:18:46.523255 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.523319 kubelet[2512]: E0813 07:18:46.523262 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.523460 kubelet[2512]: E0813 07:18:46.523436 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.523460 kubelet[2512]: W0813 07:18:46.523456 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.523511 kubelet[2512]: E0813 07:18:46.523464 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.523666 kubelet[2512]: E0813 07:18:46.523651 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.523666 kubelet[2512]: W0813 07:18:46.523662 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.523720 kubelet[2512]: E0813 07:18:46.523671 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.523885 kubelet[2512]: E0813 07:18:46.523870 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.523885 kubelet[2512]: W0813 07:18:46.523881 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.523943 kubelet[2512]: E0813 07:18:46.523889 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.524089 kubelet[2512]: E0813 07:18:46.524075 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.524089 kubelet[2512]: W0813 07:18:46.524087 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.524254 kubelet[2512]: E0813 07:18:46.524095 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.524282 kubelet[2512]: E0813 07:18:46.524269 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.524282 kubelet[2512]: W0813 07:18:46.524276 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.524330 kubelet[2512]: E0813 07:18:46.524284 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.538727 kubelet[2512]: E0813 07:18:46.538695 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.538727 kubelet[2512]: W0813 07:18:46.538711 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.538727 kubelet[2512]: E0813 07:18:46.538724 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.539114 kubelet[2512]: E0813 07:18:46.539098 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.539114 kubelet[2512]: W0813 07:18:46.539109 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.539114 kubelet[2512]: E0813 07:18:46.539119 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.543267 kubelet[2512]: E0813 07:18:46.543224 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.543267 kubelet[2512]: W0813 07:18:46.543261 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.543459 kubelet[2512]: E0813 07:18:46.543291 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.543666 kubelet[2512]: E0813 07:18:46.543648 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.543666 kubelet[2512]: W0813 07:18:46.543662 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.543747 kubelet[2512]: E0813 07:18:46.543672 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.543983 kubelet[2512]: E0813 07:18:46.543966 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.543983 kubelet[2512]: W0813 07:18:46.543978 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.544072 kubelet[2512]: E0813 07:18:46.543988 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.544389 kubelet[2512]: E0813 07:18:46.544372 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.544389 kubelet[2512]: W0813 07:18:46.544384 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.544462 kubelet[2512]: E0813 07:18:46.544406 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.544778 kubelet[2512]: E0813 07:18:46.544755 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.544778 kubelet[2512]: W0813 07:18:46.544772 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.544899 kubelet[2512]: E0813 07:18:46.544787 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.545392 kubelet[2512]: E0813 07:18:46.545142 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.545392 kubelet[2512]: W0813 07:18:46.545156 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.545392 kubelet[2512]: E0813 07:18:46.545166 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.545876 kubelet[2512]: E0813 07:18:46.545854 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.545876 kubelet[2512]: W0813 07:18:46.545871 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.545973 kubelet[2512]: E0813 07:18:46.545885 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.546208 kubelet[2512]: E0813 07:18:46.546191 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.546208 kubelet[2512]: W0813 07:18:46.546206 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.546289 kubelet[2512]: E0813 07:18:46.546219 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.546496 kubelet[2512]: E0813 07:18:46.546481 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.546559 kubelet[2512]: W0813 07:18:46.546494 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.546559 kubelet[2512]: E0813 07:18:46.546506 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.546771 kubelet[2512]: E0813 07:18:46.546751 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.546771 kubelet[2512]: W0813 07:18:46.546766 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.546856 kubelet[2512]: E0813 07:18:46.546786 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.547119 kubelet[2512]: E0813 07:18:46.547102 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.547119 kubelet[2512]: W0813 07:18:46.547116 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.547205 kubelet[2512]: E0813 07:18:46.547126 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.547407 kubelet[2512]: E0813 07:18:46.547392 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.547439 kubelet[2512]: W0813 07:18:46.547405 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.547439 kubelet[2512]: E0813 07:18:46.547416 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.547774 kubelet[2512]: E0813 07:18:46.547759 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.547774 kubelet[2512]: W0813 07:18:46.547772 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.547855 kubelet[2512]: E0813 07:18:46.547784 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.548091 kubelet[2512]: E0813 07:18:46.548077 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.548114 kubelet[2512]: W0813 07:18:46.548092 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.548114 kubelet[2512]: E0813 07:18:46.548103 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.548482 kubelet[2512]: E0813 07:18:46.548459 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.548482 kubelet[2512]: W0813 07:18:46.548479 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.548542 kubelet[2512]: E0813 07:18:46.548496 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:18:46.548742 kubelet[2512]: E0813 07:18:46.548728 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:18:46.548742 kubelet[2512]: W0813 07:18:46.548738 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:18:46.548786 kubelet[2512]: E0813 07:18:46.548747 2512 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:18:46.665732 containerd[1456]: time="2025-08-13T07:18:46.665689940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:46.666473 containerd[1456]: time="2025-08-13T07:18:46.666423026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:18:46.667653 containerd[1456]: time="2025-08-13T07:18:46.667624951Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:46.669751 containerd[1456]: time="2025-08-13T07:18:46.669713184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:46.670307 containerd[1456]: time="2025-08-13T07:18:46.670277837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.283330278s" Aug 13 07:18:46.670350 containerd[1456]: time="2025-08-13T07:18:46.670305590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:18:46.684322 containerd[1456]: time="2025-08-13T07:18:46.684292228Z" level=info msg="CreateContainer within sandbox \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:18:46.700468 containerd[1456]: time="2025-08-13T07:18:46.700417055Z" level=info msg="CreateContainer within sandbox \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6\"" Aug 13 07:18:46.700970 containerd[1456]: time="2025-08-13T07:18:46.700945398Z" level=info msg="StartContainer for \"9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6\"" Aug 13 07:18:46.732958 systemd[1]: Started cri-containerd-9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6.scope - libcontainer container 9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6. 
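The burst of driver-call.go failures above is the kubelet probing its FlexVolume plugin directory before Calico's pod2daemon-flexvol init container, whose image pull and container start are recorded just above, has installed the nodeagent~uds/uds binary: with no executable on disk, every probe returns empty output and the JSON unmarshal fails. For orientation, a minimal sketch of the contract a FlexVolume driver answers, where the kubelet invokes the binary with a subcommand such as init and parses a JSON status object from stdout (assumed shapes for illustration, not Calico's actual driver):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's driver-call.go
// unmarshals after every FlexVolume invocation; an empty stdout is what
// produces "unexpected end of JSON input" in the entries above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A conforming driver answers "init" with a Success object and
		// advertises its capabilities.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any unimplemented call is reported as unsupported.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}

Once the flexvol-driver container drops a binary like this into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the dynamic probe errors stop.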
Aug 13 07:18:46.767643 containerd[1456]: time="2025-08-13T07:18:46.767600820Z" level=info msg="StartContainer for \"9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6\" returns successfully" Aug 13 07:18:46.779655 systemd[1]: cri-containerd-9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6.scope: Deactivated successfully. Aug 13 07:18:47.190636 containerd[1456]: time="2025-08-13T07:18:47.190566605Z" level=info msg="shim disconnected" id=9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6 namespace=k8s.io Aug 13 07:18:47.190636 containerd[1456]: time="2025-08-13T07:18:47.190631169Z" level=warning msg="cleaning up after shim disconnected" id=9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6 namespace=k8s.io Aug 13 07:18:47.190636 containerd[1456]: time="2025-08-13T07:18:47.190643883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:47.394000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f3f8e630eba88d458d3cdeccc561f64dbf8a6bb5948ce9e75905e1011c795a6-rootfs.mount: Deactivated successfully. Aug 13 07:18:47.401382 kubelet[2512]: E0813 07:18:47.401326 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:47.468339 kubelet[2512]: E0813 07:18:47.468209 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:47.469271 containerd[1456]: time="2025-08-13T07:18:47.469225979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:18:48.470240 kubelet[2512]: E0813 07:18:48.470196 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:49.399558 kubelet[2512]: E0813 07:18:49.399482 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:50.657835 containerd[1456]: time="2025-08-13T07:18:50.657743836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.658970 containerd[1456]: time="2025-08-13T07:18:50.658900116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:18:50.660329 containerd[1456]: time="2025-08-13T07:18:50.660280724Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.663135 containerd[1456]: time="2025-08-13T07:18:50.663072841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:50.663846 containerd[1456]: time="2025-08-13T07:18:50.663786826Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.194521533s" Aug 13 07:18:50.663911 containerd[1456]: time="2025-08-13T07:18:50.663849696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:18:50.669580 containerd[1456]: time="2025-08-13T07:18:50.669520595Z" level=info msg="CreateContainer within sandbox \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:18:50.687762 containerd[1456]: time="2025-08-13T07:18:50.687701907Z" level=info msg="CreateContainer within sandbox \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a\"" Aug 13 07:18:50.688408 containerd[1456]: time="2025-08-13T07:18:50.688374212Z" level=info msg="StartContainer for \"9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a\"" Aug 13 07:18:50.722032 systemd[1]: Started cri-containerd-9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a.scope - libcontainer container 9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a. Aug 13 07:18:50.756021 containerd[1456]: time="2025-08-13T07:18:50.755945156Z" level=info msg="StartContainer for \"9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a\" returns successfully" Aug 13 07:18:51.400191 kubelet[2512]: E0813 07:18:51.399990 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:52.009746 systemd[1]: cri-containerd-9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a.scope: Deactivated successfully. Aug 13 07:18:52.032678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a-rootfs.mount: Deactivated successfully. Aug 13 07:18:52.049149 kubelet[2512]: I0813 07:18:52.048485 2512 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:18:52.473740 containerd[1456]: time="2025-08-13T07:18:52.473643254Z" level=info msg="shim disconnected" id=9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a namespace=k8s.io Aug 13 07:18:52.473740 containerd[1456]: time="2025-08-13T07:18:52.473725600Z" level=warning msg="cleaning up after shim disconnected" id=9623663c2746c9e1ce1c106e00859320740b8122e597d1d1e5ddc6477f42739a namespace=k8s.io Aug 13 07:18:52.473740 containerd[1456]: time="2025-08-13T07:18:52.473741020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:52.495419 systemd[1]: Created slice kubepods-burstable-pod7664d1a0_e7f0_48d5_bd0d_61e02b72f59f.slice - libcontainer container kubepods-burstable-pod7664d1a0_e7f0_48d5_bd0d_61e02b72f59f.slice. 
Aug 13 07:18:52.504483 systemd[1]: Created slice kubepods-besteffort-pod2740fd78_4ba0_40d0_9638_65458c5f2e1e.slice - libcontainer container kubepods-besteffort-pod2740fd78_4ba0_40d0_9638_65458c5f2e1e.slice. Aug 13 07:18:52.516031 systemd[1]: Created slice kubepods-burstable-pod62b0e9a2_2b8a_410c_bf54_6c522a15fa93.slice - libcontainer container kubepods-burstable-pod62b0e9a2_2b8a_410c_bf54_6c522a15fa93.slice. Aug 13 07:18:52.528341 systemd[1]: Created slice kubepods-besteffort-podbab077f6_800e_450e_ac7f_4fa8a8599eca.slice - libcontainer container kubepods-besteffort-podbab077f6_800e_450e_ac7f_4fa8a8599eca.slice. Aug 13 07:18:52.535419 systemd[1]: Created slice kubepods-besteffort-podc1d1c5ee_dd0d_4857_8db1_ad1baffd1d4b.slice - libcontainer container kubepods-besteffort-podc1d1c5ee_dd0d_4857_8db1_ad1baffd1d4b.slice. Aug 13 07:18:52.541130 systemd[1]: Created slice kubepods-besteffort-podb55bac42_942a_48b6_84f6_be639523c7be.slice - libcontainer container kubepods-besteffort-podb55bac42_942a_48b6_84f6_be639523c7be.slice. Aug 13 07:18:52.548152 systemd[1]: Created slice kubepods-besteffort-podab627325_2749_42aa_91f9_75c79fd24e77.slice - libcontainer container kubepods-besteffort-podab627325_2749_42aa_91f9_75c79fd24e77.slice. Aug 13 07:18:52.584515 kubelet[2512]: I0813 07:18:52.584434 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lknln\" (UID: \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\") " pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:52.584515 kubelet[2512]: I0813 07:18:52.584505 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lknln\" (UID: \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\") " pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:52.585059 kubelet[2512]: I0813 07:18:52.584537 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7664d1a0-e7f0-48d5-bd0d-61e02b72f59f-config-volume\") pod \"coredns-674b8bbfcf-xx8kw\" (UID: \"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f\") " pod="kube-system/coredns-674b8bbfcf-xx8kw" Aug 13 07:18:52.585059 kubelet[2512]: I0813 07:18:52.584593 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-ca-bundle\") pod \"whisker-745cfdf7c7-mzblt\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " pod="calico-system/whisker-745cfdf7c7-mzblt" Aug 13 07:18:52.585059 kubelet[2512]: I0813 07:18:52.584618 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfsd\" (UniqueName: \"kubernetes.io/projected/ab627325-2749-42aa-91f9-75c79fd24e77-kube-api-access-hpfsd\") pod \"whisker-745cfdf7c7-mzblt\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " pod="calico-system/whisker-745cfdf7c7-mzblt" Aug 13 07:18:52.585059 kubelet[2512]: I0813 07:18:52.584672 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9r5p\" (UniqueName: 
\"kubernetes.io/projected/7664d1a0-e7f0-48d5-bd0d-61e02b72f59f-kube-api-access-v9r5p\") pod \"coredns-674b8bbfcf-xx8kw\" (UID: \"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f\") " pod="kube-system/coredns-674b8bbfcf-xx8kw" Aug 13 07:18:52.585059 kubelet[2512]: I0813 07:18:52.584734 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2740fd78-4ba0-40d0-9638-65458c5f2e1e-calico-apiserver-certs\") pod \"calico-apiserver-655dd967b8-nrt5s\" (UID: \"2740fd78-4ba0-40d0-9638-65458c5f2e1e\") " pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" Aug 13 07:18:52.585212 kubelet[2512]: I0813 07:18:52.584751 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5w5\" (UniqueName: \"kubernetes.io/projected/2740fd78-4ba0-40d0-9638-65458c5f2e1e-kube-api-access-9t5w5\") pod \"calico-apiserver-655dd967b8-nrt5s\" (UID: \"2740fd78-4ba0-40d0-9638-65458c5f2e1e\") " pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" Aug 13 07:18:52.585212 kubelet[2512]: I0813 07:18:52.584771 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tww9c\" (UniqueName: \"kubernetes.io/projected/62b0e9a2-2b8a-410c-bf54-6c522a15fa93-kube-api-access-tww9c\") pod \"coredns-674b8bbfcf-sx5l7\" (UID: \"62b0e9a2-2b8a-410c-bf54-6c522a15fa93\") " pod="kube-system/coredns-674b8bbfcf-sx5l7" Aug 13 07:18:52.585212 kubelet[2512]: I0813 07:18:52.584790 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b55bac42-942a-48b6-84f6-be639523c7be-tigera-ca-bundle\") pod \"calico-kube-controllers-6bc56dc789-lw45n\" (UID: \"b55bac42-942a-48b6-84f6-be639523c7be\") " pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" Aug 13 07:18:52.585212 kubelet[2512]: I0813 07:18:52.584806 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjcn\" (UniqueName: \"kubernetes.io/projected/c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b-kube-api-access-pcjcn\") pod \"goldmane-768f4c5c69-lknln\" (UID: \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\") " pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:52.585212 kubelet[2512]: I0813 07:18:52.584845 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-backend-key-pair\") pod \"whisker-745cfdf7c7-mzblt\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " pod="calico-system/whisker-745cfdf7c7-mzblt" Aug 13 07:18:52.585341 kubelet[2512]: I0813 07:18:52.584862 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqx2\" (UniqueName: \"kubernetes.io/projected/bab077f6-800e-450e-ac7f-4fa8a8599eca-kube-api-access-hcqx2\") pod \"calico-apiserver-655dd967b8-5xw68\" (UID: \"bab077f6-800e-450e-ac7f-4fa8a8599eca\") " pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" Aug 13 07:18:52.585341 kubelet[2512]: I0813 07:18:52.584881 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62b0e9a2-2b8a-410c-bf54-6c522a15fa93-config-volume\") pod \"coredns-674b8bbfcf-sx5l7\" (UID: 
\"62b0e9a2-2b8a-410c-bf54-6c522a15fa93\") " pod="kube-system/coredns-674b8bbfcf-sx5l7" Aug 13 07:18:52.585341 kubelet[2512]: I0813 07:18:52.584897 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhqt\" (UniqueName: \"kubernetes.io/projected/b55bac42-942a-48b6-84f6-be639523c7be-kube-api-access-vhhqt\") pod \"calico-kube-controllers-6bc56dc789-lw45n\" (UID: \"b55bac42-942a-48b6-84f6-be639523c7be\") " pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" Aug 13 07:18:52.585341 kubelet[2512]: I0813 07:18:52.584926 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b-config\") pod \"goldmane-768f4c5c69-lknln\" (UID: \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\") " pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:52.585341 kubelet[2512]: I0813 07:18:52.584941 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bab077f6-800e-450e-ac7f-4fa8a8599eca-calico-apiserver-certs\") pod \"calico-apiserver-655dd967b8-5xw68\" (UID: \"bab077f6-800e-450e-ac7f-4fa8a8599eca\") " pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" Aug 13 07:18:52.798464 kubelet[2512]: E0813 07:18:52.798324 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:52.799064 containerd[1456]: time="2025-08-13T07:18:52.799015827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xx8kw,Uid:7664d1a0-e7f0-48d5-bd0d-61e02b72f59f,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:52.807902 containerd[1456]: time="2025-08-13T07:18:52.807852767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-nrt5s,Uid:2740fd78-4ba0-40d0-9638-65458c5f2e1e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:18:52.821898 kubelet[2512]: E0813 07:18:52.821273 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:52.822521 containerd[1456]: time="2025-08-13T07:18:52.822467355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sx5l7,Uid:62b0e9a2-2b8a-410c-bf54-6c522a15fa93,Namespace:kube-system,Attempt:0,}" Aug 13 07:18:52.834315 containerd[1456]: time="2025-08-13T07:18:52.834216703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-5xw68,Uid:bab077f6-800e-450e-ac7f-4fa8a8599eca,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:18:52.840717 containerd[1456]: time="2025-08-13T07:18:52.840667748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lknln,Uid:c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:52.847189 containerd[1456]: time="2025-08-13T07:18:52.847158077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc56dc789-lw45n,Uid:b55bac42-942a-48b6-84f6-be639523c7be,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:52.853461 containerd[1456]: time="2025-08-13T07:18:52.852592892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745cfdf7c7-mzblt,Uid:ab627325-2749-42aa-91f9-75c79fd24e77,Namespace:calico-system,Attempt:0,}" 
Aug 13 07:18:53.024976 containerd[1456]: time="2025-08-13T07:18:53.024894888Z" level=error msg="Failed to destroy network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.030565 containerd[1456]: time="2025-08-13T07:18:53.030503798Z" level=error msg="encountered an error cleaning up failed sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.033656 containerd[1456]: time="2025-08-13T07:18:53.033610667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-nrt5s,Uid:2740fd78-4ba0-40d0-9638-65458c5f2e1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.040840 kubelet[2512]: E0813 07:18:53.035969 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.040840 kubelet[2512]: E0813 07:18:53.036049 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" Aug 13 07:18:53.040840 kubelet[2512]: E0813 07:18:53.036079 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" Aug 13 07:18:53.040974 kubelet[2512]: E0813 07:18:53.036133 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655dd967b8-nrt5s_calico-apiserver(2740fd78-4ba0-40d0-9638-65458c5f2e1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655dd967b8-nrt5s_calico-apiserver(2740fd78-4ba0-40d0-9638-65458c5f2e1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" podUID="2740fd78-4ba0-40d0-9638-65458c5f2e1e" Aug 13 07:18:53.043565 containerd[1456]: time="2025-08-13T07:18:53.041928605Z" level=error msg="Failed to destroy network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.044794 containerd[1456]: time="2025-08-13T07:18:53.044747173Z" level=error msg="encountered an error cleaning up failed sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.044859 containerd[1456]: time="2025-08-13T07:18:53.044838858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sx5l7,Uid:62b0e9a2-2b8a-410c-bf54-6c522a15fa93,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.045418 kubelet[2512]: E0813 07:18:53.045303 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.045418 kubelet[2512]: E0813 07:18:53.045349 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sx5l7" Aug 13 07:18:53.045418 kubelet[2512]: E0813 07:18:53.045372 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sx5l7" Aug 13 07:18:53.047831 kubelet[2512]: E0813 07:18:53.045916 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sx5l7_kube-system(62b0e9a2-2b8a-410c-bf54-6c522a15fa93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sx5l7_kube-system(62b0e9a2-2b8a-410c-bf54-6c522a15fa93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sx5l7" podUID="62b0e9a2-2b8a-410c-bf54-6c522a15fa93" Aug 13 07:18:53.054645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9-shm.mount: Deactivated successfully. Aug 13 07:18:53.060832 containerd[1456]: time="2025-08-13T07:18:53.060773287Z" level=error msg="Failed to destroy network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.063085 containerd[1456]: time="2025-08-13T07:18:53.061202206Z" level=error msg="encountered an error cleaning up failed sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.063085 containerd[1456]: time="2025-08-13T07:18:53.061530792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xx8kw,Uid:7664d1a0-e7f0-48d5-bd0d-61e02b72f59f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.063012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b-shm.mount: Deactivated successfully. 
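Every RunPodSandbox failure in this stretch traces to a single missing file: the Calico CNI plugin learns which node it is on by reading /var/lib/calico/nodename, which the calico/node container writes once it is running. Until then, each CNI ADD stats the file, fails with the error quoted above, and the pod stays stuck in sandbox creation. A minimal reconstruction of that guard (illustrative, not the plugin's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// The Calico CNI plugin discovers its node identity from this file, which
// the calico/node container writes once it is up.
const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Reproduces the wording seen in the sandbox failures above.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // every CNI ADD fails here until calico-node writes the file
	}
	fmt.Println("node:", name)
}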
Aug 13 07:18:53.063214 kubelet[2512]: E0813 07:18:53.061804 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.063214 kubelet[2512]: E0813 07:18:53.061890 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xx8kw" Aug 13 07:18:53.063214 kubelet[2512]: E0813 07:18:53.061914 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xx8kw" Aug 13 07:18:53.063306 kubelet[2512]: E0813 07:18:53.061972 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xx8kw_kube-system(7664d1a0-e7f0-48d5-bd0d-61e02b72f59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xx8kw_kube-system(7664d1a0-e7f0-48d5-bd0d-61e02b72f59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xx8kw" podUID="7664d1a0-e7f0-48d5-bd0d-61e02b72f59f" Aug 13 07:18:53.088851 containerd[1456]: time="2025-08-13T07:18:53.087315094Z" level=error msg="Failed to destroy network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.088851 containerd[1456]: time="2025-08-13T07:18:53.087963832Z" level=error msg="encountered an error cleaning up failed sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.088851 containerd[1456]: time="2025-08-13T07:18:53.088033685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-5xw68,Uid:bab077f6-800e-450e-ac7f-4fa8a8599eca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.089144 kubelet[2512]: E0813 07:18:53.088480 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.089144 kubelet[2512]: E0813 07:18:53.088557 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" Aug 13 07:18:53.089144 kubelet[2512]: E0813 07:18:53.088584 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" Aug 13 07:18:53.089255 kubelet[2512]: E0813 07:18:53.088638 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655dd967b8-5xw68_calico-apiserver(bab077f6-800e-450e-ac7f-4fa8a8599eca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655dd967b8-5xw68_calico-apiserver(bab077f6-800e-450e-ac7f-4fa8a8599eca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" podUID="bab077f6-800e-450e-ac7f-4fa8a8599eca" Aug 13 07:18:53.091564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec-shm.mount: Deactivated successfully. 
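The identical failure then repeats for every pod scheduled to the node. Each "Error syncing pod, skipping" is one attempt by the kubelet's pod workers, which re-queue the pod and retry with growing delay until sandbox creation succeeds. A rough illustration of that retry loop (a sketch of the general pattern, not kubelet source):

package main

import (
	"errors"
	"fmt"
	"time"
)

// syncWithBackoff stands in for the kubelet's pod workers: each failed
// sync is logged and retried after an increasing delay, capped at max.
func syncWithBackoff(sync func() error, base, max time.Duration) {
	for delay := base; ; delay = min(2*delay, max) {
		err := sync()
		if err == nil {
			return
		}
		fmt.Println("Error syncing pod, skipping:", err)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	syncWithBackoff(func() error {
		if attempts++; attempts < 3 {
			// Stands in for the CreatePodSandbox failure: fails until the
			// CNI plugin can resolve /var/lib/calico/nodename.
			return errors.New("failed to setup network for sandbox")
		}
		return nil
	}, 100*time.Millisecond, 5*time.Second)
	fmt.Printf("sandbox created on attempt %d\n", attempts)
}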
Aug 13 07:18:53.101832 containerd[1456]: time="2025-08-13T07:18:53.099462168Z" level=error msg="Failed to destroy network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.101832 containerd[1456]: time="2025-08-13T07:18:53.101197479Z" level=error msg="encountered an error cleaning up failed sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.101832 containerd[1456]: time="2025-08-13T07:18:53.101246183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lknln,Uid:c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.103024 kubelet[2512]: E0813 07:18:53.102983 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.103083 kubelet[2512]: E0813 07:18:53.103035 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:53.103083 kubelet[2512]: E0813 07:18:53.103056 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lknln" Aug 13 07:18:53.103025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024-shm.mount: Deactivated successfully. 
Aug 13 07:18:53.103200 kubelet[2512]: E0813 07:18:53.103095 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lknln_calico-system(c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lknln_calico-system(c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lknln" podUID="c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b" Aug 13 07:18:53.119248 containerd[1456]: time="2025-08-13T07:18:53.119180678Z" level=error msg="Failed to destroy network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.119755 containerd[1456]: time="2025-08-13T07:18:53.119720238Z" level=error msg="encountered an error cleaning up failed sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.119843 containerd[1456]: time="2025-08-13T07:18:53.119800611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc56dc789-lw45n,Uid:b55bac42-942a-48b6-84f6-be639523c7be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.120196 kubelet[2512]: E0813 07:18:53.120137 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.120307 kubelet[2512]: E0813 07:18:53.120226 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" Aug 13 07:18:53.120307 kubelet[2512]: E0813 07:18:53.120252 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" Aug 13 07:18:53.120389 kubelet[2512]: E0813 07:18:53.120322 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bc56dc789-lw45n_calico-system(b55bac42-942a-48b6-84f6-be639523c7be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bc56dc789-lw45n_calico-system(b55bac42-942a-48b6-84f6-be639523c7be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" podUID="b55bac42-942a-48b6-84f6-be639523c7be" Aug 13 07:18:53.128312 containerd[1456]: time="2025-08-13T07:18:53.128240803Z" level=error msg="Failed to destroy network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.128720 containerd[1456]: time="2025-08-13T07:18:53.128685471Z" level=error msg="encountered an error cleaning up failed sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.128766 containerd[1456]: time="2025-08-13T07:18:53.128740246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745cfdf7c7-mzblt,Uid:ab627325-2749-42aa-91f9-75c79fd24e77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.129064 kubelet[2512]: E0813 07:18:53.129016 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.129112 kubelet[2512]: E0813 07:18:53.129090 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745cfdf7c7-mzblt" Aug 13 07:18:53.129157 kubelet[2512]: E0813 07:18:53.129116 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745cfdf7c7-mzblt" Aug 13 07:18:53.129218 kubelet[2512]: E0813 07:18:53.129180 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-745cfdf7c7-mzblt_calico-system(ab627325-2749-42aa-91f9-75c79fd24e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-745cfdf7c7-mzblt_calico-system(ab627325-2749-42aa-91f9-75c79fd24e77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745cfdf7c7-mzblt" podUID="ab627325-2749-42aa-91f9-75c79fd24e77" Aug 13 07:18:53.405159 systemd[1]: Created slice kubepods-besteffort-pod0ec0a1a1_c8b0_4122_ab58_78229dc90d73.slice - libcontainer container kubepods-besteffort-pod0ec0a1a1_c8b0_4122_ab58_78229dc90d73.slice. Aug 13 07:18:53.407909 containerd[1456]: time="2025-08-13T07:18:53.407863663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcjzr,Uid:0ec0a1a1-c8b0-4122-ab58-78229dc90d73,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:53.467370 containerd[1456]: time="2025-08-13T07:18:53.467308542Z" level=error msg="Failed to destroy network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.467785 containerd[1456]: time="2025-08-13T07:18:53.467749464Z" level=error msg="encountered an error cleaning up failed sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.467852 containerd[1456]: time="2025-08-13T07:18:53.467804479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcjzr,Uid:0ec0a1a1-c8b0-4122-ab58-78229dc90d73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.468142 kubelet[2512]: E0813 07:18:53.468096 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.468213 kubelet[2512]: E0813 07:18:53.468170 2512 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:53.468213 kubelet[2512]: E0813 07:18:53.468194 2512 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fcjzr" Aug 13 07:18:53.468283 kubelet[2512]: E0813 07:18:53.468255 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fcjzr_calico-system(0ec0a1a1-c8b0-4122-ab58-78229dc90d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fcjzr_calico-system(0ec0a1a1-c8b0-4122-ab58-78229dc90d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:53.486694 kubelet[2512]: I0813 07:18:53.486636 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:18:53.487583 containerd[1456]: time="2025-08-13T07:18:53.487510414Z" level=info msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" Aug 13 07:18:53.488707 kubelet[2512]: I0813 07:18:53.487883 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:18:53.488873 kubelet[2512]: I0813 07:18:53.488834 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:18:53.489124 containerd[1456]: time="2025-08-13T07:18:53.488879846Z" level=info msg="Ensure that sandbox 59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1 in task-service has been cleanup successfully" Aug 13 07:18:53.490550 containerd[1456]: time="2025-08-13T07:18:53.489448141Z" level=info msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" Aug 13 07:18:53.490550 containerd[1456]: time="2025-08-13T07:18:53.489632362Z" level=info msg="Ensure that sandbox e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b in task-service has been cleanup successfully" Aug 13 07:18:53.490550 containerd[1456]: time="2025-08-13T07:18:53.489654334Z" level=info msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" Aug 13 07:18:53.490550 containerd[1456]: time="2025-08-13T07:18:53.489867611Z" level=info msg="Ensure that sandbox eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda in task-service has been cleanup successfully" Aug 13 07:18:53.492145 kubelet[2512]: I0813 07:18:53.492053 2512 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:18:53.492654 containerd[1456]: time="2025-08-13T07:18:53.492609512Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:18:53.492783 containerd[1456]: time="2025-08-13T07:18:53.492755000Z" level=info msg="Ensure that sandbox 37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71 in task-service has been cleanup successfully" Aug 13 07:18:53.495022 kubelet[2512]: I0813 07:18:53.494987 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:18:53.495565 containerd[1456]: time="2025-08-13T07:18:53.495477965Z" level=info msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" Aug 13 07:18:53.495887 containerd[1456]: time="2025-08-13T07:18:53.495639713Z" level=info msg="Ensure that sandbox aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296 in task-service has been cleanup successfully" Aug 13 07:18:53.500420 kubelet[2512]: I0813 07:18:53.500105 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:18:53.502046 containerd[1456]: time="2025-08-13T07:18:53.502010157Z" level=info msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" Aug 13 07:18:53.502808 containerd[1456]: time="2025-08-13T07:18:53.502215118Z" level=info msg="Ensure that sandbox 888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9 in task-service has been cleanup successfully" Aug 13 07:18:53.504375 kubelet[2512]: I0813 07:18:53.504355 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:18:53.505438 containerd[1456]: time="2025-08-13T07:18:53.505415093Z" level=info msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" Aug 13 07:18:53.506393 containerd[1456]: time="2025-08-13T07:18:53.505692712Z" level=info msg="Ensure that sandbox 31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec in task-service has been cleanup successfully" Aug 13 07:18:53.514595 containerd[1456]: time="2025-08-13T07:18:53.514555271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:18:53.518382 kubelet[2512]: I0813 07:18:53.518355 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:18:53.519491 containerd[1456]: time="2025-08-13T07:18:53.519094611Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:18:53.519491 containerd[1456]: time="2025-08-13T07:18:53.519319189Z" level=info msg="Ensure that sandbox 7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024 in task-service has been cleanup successfully" Aug 13 07:18:53.576777 containerd[1456]: time="2025-08-13T07:18:53.576708558Z" level=error msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" failed" error="failed to destroy network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.577072 kubelet[2512]: E0813 07:18:53.577025 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:18:53.577197 kubelet[2512]: E0813 07:18:53.577097 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296"} Aug 13 07:18:53.577197 kubelet[2512]: E0813 07:18:53.577161 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.577197 kubelet[2512]: E0813 07:18:53.577187 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ec0a1a1-c8b0-4122-ab58-78229dc90d73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fcjzr" podUID="0ec0a1a1-c8b0-4122-ab58-78229dc90d73" Aug 13 07:18:53.577331 containerd[1456]: time="2025-08-13T07:18:53.577102951Z" level=error msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" failed" error="failed to destroy network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.577368 kubelet[2512]: E0813 07:18:53.577306 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:18:53.577368 kubelet[2512]: E0813 07:18:53.577340 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1"} Aug 13 07:18:53.577368 kubelet[2512]: E0813 07:18:53.577362 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2740fd78-4ba0-40d0-9638-65458c5f2e1e\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.577464 kubelet[2512]: E0813 07:18:53.577380 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2740fd78-4ba0-40d0-9638-65458c5f2e1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" podUID="2740fd78-4ba0-40d0-9638-65458c5f2e1e" Aug 13 07:18:53.578362 containerd[1456]: time="2025-08-13T07:18:53.578027985Z" level=error msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" failed" error="failed to destroy network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.578414 kubelet[2512]: E0813 07:18:53.578159 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:18:53.578414 kubelet[2512]: E0813 07:18:53.578217 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71"} Aug 13 07:18:53.578414 kubelet[2512]: E0813 07:18:53.578239 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b55bac42-942a-48b6-84f6-be639523c7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.578414 kubelet[2512]: E0813 07:18:53.578257 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b55bac42-942a-48b6-84f6-be639523c7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" podUID="b55bac42-942a-48b6-84f6-be639523c7be" Aug 13 07:18:53.582207 containerd[1456]: time="2025-08-13T07:18:53.582158155Z" 
level=error msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" failed" error="failed to destroy network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.583006 kubelet[2512]: E0813 07:18:53.582962 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:18:53.583208 kubelet[2512]: E0813 07:18:53.583184 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9"} Aug 13 07:18:53.583539 kubelet[2512]: E0813 07:18:53.583460 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"62b0e9a2-2b8a-410c-bf54-6c522a15fa93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.583539 kubelet[2512]: E0813 07:18:53.583493 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"62b0e9a2-2b8a-410c-bf54-6c522a15fa93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sx5l7" podUID="62b0e9a2-2b8a-410c-bf54-6c522a15fa93" Aug 13 07:18:53.588158 containerd[1456]: time="2025-08-13T07:18:53.587862758Z" level=error msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" failed" error="failed to destroy network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.590185 containerd[1456]: time="2025-08-13T07:18:53.588315101Z" level=error msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" failed" error="failed to destroy network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.590333 kubelet[2512]: E0813 07:18:53.590173 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:18:53.590333 kubelet[2512]: E0813 07:18:53.590222 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b"} Aug 13 07:18:53.590333 kubelet[2512]: E0813 07:18:53.590257 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.590333 kubelet[2512]: E0813 07:18:53.590279 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xx8kw" podUID="7664d1a0-e7f0-48d5-bd0d-61e02b72f59f" Aug 13 07:18:53.590860 containerd[1456]: time="2025-08-13T07:18:53.590770716Z" level=error msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" failed" error="failed to destroy network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.590969 kubelet[2512]: E0813 07:18:53.590167 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:18:53.591132 kubelet[2512]: E0813 07:18:53.590979 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024"} Aug 13 07:18:53.591132 kubelet[2512]: E0813 07:18:53.590975 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:18:53.591132 
kubelet[2512]: E0813 07:18:53.591013 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda"} Aug 13 07:18:53.591132 kubelet[2512]: E0813 07:18:53.591023 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.591132 kubelet[2512]: E0813 07:18:53.591037 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab627325-2749-42aa-91f9-75c79fd24e77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.591332 kubelet[2512]: E0813 07:18:53.591049 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lknln" podUID="c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b" Aug 13 07:18:53.591332 kubelet[2512]: E0813 07:18:53.591056 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab627325-2749-42aa-91f9-75c79fd24e77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745cfdf7c7-mzblt" podUID="ab627325-2749-42aa-91f9-75c79fd24e77" Aug 13 07:18:53.599620 containerd[1456]: time="2025-08-13T07:18:53.599568210Z" level=error msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" failed" error="failed to destroy network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:53.599894 kubelet[2512]: E0813 07:18:53.599833 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:18:53.599940 kubelet[2512]: E0813 07:18:53.599894 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec"} Aug 13 07:18:53.599940 kubelet[2512]: E0813 07:18:53.599925 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bab077f6-800e-450e-ac7f-4fa8a8599eca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:53.600017 kubelet[2512]: E0813 07:18:53.599950 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bab077f6-800e-450e-ac7f-4fa8a8599eca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" podUID="bab077f6-800e-450e-ac7f-4fa8a8599eca" Aug 13 07:18:54.034734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71-shm.mount: Deactivated successfully. Aug 13 07:18:54.034863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda-shm.mount: Deactivated successfully. Aug 13 07:19:03.336373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853545227.mount: Deactivated successfully. 
Aug 13 07:19:04.400881 containerd[1456]: time="2025-08-13T07:19:04.400687373Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:19:04.400881 containerd[1456]: time="2025-08-13T07:19:04.400755442Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:19:04.462359 containerd[1456]: time="2025-08-13T07:19:04.462176361Z" level=error msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" failed" error="failed to destroy network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:04.462531 kubelet[2512]: E0813 07:19:04.462396 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:04.462531 kubelet[2512]: E0813 07:19:04.462455 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024"} Aug 13 07:19:04.462531 kubelet[2512]: E0813 07:19:04.462505 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:04.463076 kubelet[2512]: E0813 07:19:04.462535 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lknln" podUID="c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b" Aug 13 07:19:04.464244 containerd[1456]: time="2025-08-13T07:19:04.464192399Z" level=error msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" failed" error="failed to destroy network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:04.464466 kubelet[2512]: E0813 07:19:04.464424 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:04.464466 kubelet[2512]: E0813 07:19:04.464463 2512 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71"} Aug 13 07:19:04.464574 kubelet[2512]: E0813 07:19:04.464504 2512 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b55bac42-942a-48b6-84f6-be639523c7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:04.464574 kubelet[2512]: E0813 07:19:04.464531 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b55bac42-942a-48b6-84f6-be639523c7be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" podUID="b55bac42-942a-48b6-84f6-be639523c7be" Aug 13 07:19:04.516403 containerd[1456]: time="2025-08-13T07:19:04.516334675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.517215 containerd[1456]: time="2025-08-13T07:19:04.517166534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:19:04.518808 containerd[1456]: time="2025-08-13T07:19:04.518764219Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.533218 containerd[1456]: time="2025-08-13T07:19:04.533115657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.533671 containerd[1456]: time="2025-08-13T07:19:04.533617099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.018836218s" Aug 13 07:19:04.533671 containerd[1456]: time="2025-08-13T07:19:04.533664770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:19:04.550616 containerd[1456]: time="2025-08-13T07:19:04.550551383Z" level=info msg="CreateContainer within sandbox 
\"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:19:04.572345 containerd[1456]: time="2025-08-13T07:19:04.572290854Z" level=info msg="CreateContainer within sandbox \"789cb6bdbe8ccf7149ff8a26715f6f1919df0ec051bd12bb15d684cfdd0f30d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17\"" Aug 13 07:19:04.573030 containerd[1456]: time="2025-08-13T07:19:04.572982678Z" level=info msg="StartContainer for \"7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17\"" Aug 13 07:19:04.635002 systemd[1]: Started cri-containerd-7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17.scope - libcontainer container 7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17. Aug 13 07:19:04.684298 containerd[1456]: time="2025-08-13T07:19:04.684210365Z" level=info msg="StartContainer for \"7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17\" returns successfully" Aug 13 07:19:04.771427 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:19:04.771614 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 07:19:05.020587 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:41976.service - OpenSSH per-connection server daemon (10.0.0.1:41976). Aug 13 07:19:05.080968 containerd[1456]: time="2025-08-13T07:19:05.079126780Z" level=info msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" Aug 13 07:19:05.087787 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 41976 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:05.090480 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:05.102413 systemd-logind[1436]: New session 8 of user core. Aug 13 07:19:05.111029 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.156 [INFO][3825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.157 [INFO][3825] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" iface="eth0" netns="/var/run/netns/cni-2d65beb2-472c-0c29-b289-55b743933a70" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.157 [INFO][3825] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" iface="eth0" netns="/var/run/netns/cni-2d65beb2-472c-0c29-b289-55b743933a70" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.157 [INFO][3825] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" iface="eth0" netns="/var/run/netns/cni-2d65beb2-472c-0c29-b289-55b743933a70" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.157 [INFO][3825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.157 [INFO][3825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.240 [INFO][3837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.241 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.241 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.249 [WARNING][3837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.249 [INFO][3837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.251 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:05.259119 containerd[1456]: 2025-08-13 07:19:05.256 [INFO][3825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:05.262340 containerd[1456]: time="2025-08-13T07:19:05.261781096Z" level=info msg="TearDown network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" successfully" Aug 13 07:19:05.262340 containerd[1456]: time="2025-08-13T07:19:05.261840630Z" level=info msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" returns successfully" Aug 13 07:19:05.287451 sshd[3806]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:05.292552 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:41976.service: Deactivated successfully. Aug 13 07:19:05.295030 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:19:05.295744 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:19:05.296792 systemd-logind[1436]: Removed session 8. 
Aug 13 07:19:05.372919 kubelet[2512]: I0813 07:19:05.372805 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-ca-bundle\") pod \"ab627325-2749-42aa-91f9-75c79fd24e77\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " Aug 13 07:19:05.373068 kubelet[2512]: I0813 07:19:05.372928 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfsd\" (UniqueName: \"kubernetes.io/projected/ab627325-2749-42aa-91f9-75c79fd24e77-kube-api-access-hpfsd\") pod \"ab627325-2749-42aa-91f9-75c79fd24e77\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " Aug 13 07:19:05.373068 kubelet[2512]: I0813 07:19:05.372959 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-backend-key-pair\") pod \"ab627325-2749-42aa-91f9-75c79fd24e77\" (UID: \"ab627325-2749-42aa-91f9-75c79fd24e77\") " Aug 13 07:19:05.373612 kubelet[2512]: I0813 07:19:05.373531 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ab627325-2749-42aa-91f9-75c79fd24e77" (UID: "ab627325-2749-42aa-91f9-75c79fd24e77"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:19:05.377058 kubelet[2512]: I0813 07:19:05.377018 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab627325-2749-42aa-91f9-75c79fd24e77-kube-api-access-hpfsd" (OuterVolumeSpecName: "kube-api-access-hpfsd") pod "ab627325-2749-42aa-91f9-75c79fd24e77" (UID: "ab627325-2749-42aa-91f9-75c79fd24e77"). InnerVolumeSpecName "kube-api-access-hpfsd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:19:05.377720 kubelet[2512]: I0813 07:19:05.377688 2512 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ab627325-2749-42aa-91f9-75c79fd24e77" (UID: "ab627325-2749-42aa-91f9-75c79fd24e77"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:19:05.399990 containerd[1456]: time="2025-08-13T07:19:05.399943110Z" level=info msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" Aug 13 07:19:05.400183 containerd[1456]: time="2025-08-13T07:19:05.400050294Z" level=info msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" Aug 13 07:19:05.476131 kubelet[2512]: I0813 07:19:05.475055 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 07:19:05.476131 kubelet[2512]: I0813 07:19:05.475106 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpfsd\" (UniqueName: \"kubernetes.io/projected/ab627325-2749-42aa-91f9-75c79fd24e77-kube-api-access-hpfsd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:19:05.476131 kubelet[2512]: I0813 07:19:05.475122 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab627325-2749-42aa-91f9-75c79fd24e77-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.442 [INFO][3903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.443 [INFO][3903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" iface="eth0" netns="/var/run/netns/cni-685e2811-9287-d13d-e1ba-4b545084481a" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.443 [INFO][3903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" iface="eth0" netns="/var/run/netns/cni-685e2811-9287-d13d-e1ba-4b545084481a" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.444 [INFO][3903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" iface="eth0" netns="/var/run/netns/cni-685e2811-9287-d13d-e1ba-4b545084481a" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.444 [INFO][3903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.444 [INFO][3903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.470 [INFO][3918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.470 [INFO][3918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.470 [INFO][3918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.478 [WARNING][3918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.478 [INFO][3918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.480 [INFO][3918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:05.486958 containerd[1456]: 2025-08-13 07:19:05.483 [INFO][3903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:05.487747 containerd[1456]: time="2025-08-13T07:19:05.487114713Z" level=info msg="TearDown network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" successfully" Aug 13 07:19:05.487747 containerd[1456]: time="2025-08-13T07:19:05.487142917Z" level=info msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" returns successfully" Aug 13 07:19:05.488163 containerd[1456]: time="2025-08-13T07:19:05.488111646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcjzr,Uid:0ec0a1a1-c8b0-4122-ab58-78229dc90d73,Namespace:calico-system,Attempt:1,}" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.447 [INFO][3902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.447 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" iface="eth0" netns="/var/run/netns/cni-10d49742-5a0f-4318-fa2f-1bc9ecb23230" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.447 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" iface="eth0" netns="/var/run/netns/cni-10d49742-5a0f-4318-fa2f-1bc9ecb23230" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.448 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" iface="eth0" netns="/var/run/netns/cni-10d49742-5a0f-4318-fa2f-1bc9ecb23230" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.448 [INFO][3902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.448 [INFO][3902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.476 [INFO][3924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.476 [INFO][3924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.480 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.486 [WARNING][3924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.486 [INFO][3924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.488 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:05.494475 containerd[1456]: 2025-08-13 07:19:05.491 [INFO][3902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:05.494889 containerd[1456]: time="2025-08-13T07:19:05.494652666Z" level=info msg="TearDown network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" successfully" Aug 13 07:19:05.494889 containerd[1456]: time="2025-08-13T07:19:05.494680980Z" level=info msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" returns successfully" Aug 13 07:19:05.495122 kubelet[2512]: E0813 07:19:05.495076 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:05.495623 containerd[1456]: time="2025-08-13T07:19:05.495577853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sx5l7,Uid:62b0e9a2-2b8a-410c-bf54-6c522a15fa93,Namespace:kube-system,Attempt:1,}" Aug 13 07:19:05.541824 systemd[1]: run-netns-cni\x2d685e2811\x2d9287\x2dd13d\x2de1ba\x2d4b545084481a.mount: Deactivated successfully. Aug 13 07:19:05.541951 systemd[1]: run-netns-cni\x2d2d65beb2\x2d472c\x2d0c29\x2db289\x2d55b743933a70.mount: Deactivated successfully. 
Aug 13 07:19:05.542027 systemd[1]: run-netns-cni\x2d10d49742\x2d5a0f\x2d4318\x2dfa2f\x2d1bc9ecb23230.mount: Deactivated successfully. Aug 13 07:19:05.542119 systemd[1]: var-lib-kubelet-pods-ab627325\x2d2749\x2d42aa\x2d91f9\x2d75c79fd24e77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpfsd.mount: Deactivated successfully. Aug 13 07:19:05.542211 systemd[1]: var-lib-kubelet-pods-ab627325\x2d2749\x2d42aa\x2d91f9\x2d75c79fd24e77-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:19:05.552198 systemd[1]: Removed slice kubepods-besteffort-podab627325_2749_42aa_91f9_75c79fd24e77.slice - libcontainer container kubepods-besteffort-podab627325_2749_42aa_91f9_75c79fd24e77.slice. Aug 13 07:19:05.653896 kubelet[2512]: I0813 07:19:05.653675 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2hwbc" podStartSLOduration=1.759793283 podStartE2EDuration="22.653643469s" podCreationTimestamp="2025-08-13 07:18:43 +0000 UTC" firstStartedPulling="2025-08-13 07:18:43.640606939 +0000 UTC m=+19.325317418" lastFinishedPulling="2025-08-13 07:19:04.534457125 +0000 UTC m=+40.219167604" observedRunningTime="2025-08-13 07:19:05.653208594 +0000 UTC m=+41.337919093" watchObservedRunningTime="2025-08-13 07:19:05.653643469 +0000 UTC m=+41.338353998" Aug 13 07:19:05.691668 systemd[1]: Created slice kubepods-besteffort-podbed36e62_773a_4f6d_b821_585c60f2d3c7.slice - libcontainer container kubepods-besteffort-podbed36e62_773a_4f6d_b821_585c60f2d3c7.slice. Aug 13 07:19:05.777961 kubelet[2512]: I0813 07:19:05.777879 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bed36e62-773a-4f6d-b821-585c60f2d3c7-whisker-backend-key-pair\") pod \"whisker-75b95fc767-dstlv\" (UID: \"bed36e62-773a-4f6d-b821-585c60f2d3c7\") " pod="calico-system/whisker-75b95fc767-dstlv" Aug 13 07:19:05.777961 kubelet[2512]: I0813 07:19:05.777958 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tsr\" (UniqueName: \"kubernetes.io/projected/bed36e62-773a-4f6d-b821-585c60f2d3c7-kube-api-access-z5tsr\") pod \"whisker-75b95fc767-dstlv\" (UID: \"bed36e62-773a-4f6d-b821-585c60f2d3c7\") " pod="calico-system/whisker-75b95fc767-dstlv" Aug 13 07:19:05.778257 kubelet[2512]: I0813 07:19:05.777985 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed36e62-773a-4f6d-b821-585c60f2d3c7-whisker-ca-bundle\") pod \"whisker-75b95fc767-dstlv\" (UID: \"bed36e62-773a-4f6d-b821-585c60f2d3c7\") " pod="calico-system/whisker-75b95fc767-dstlv" Aug 13 07:19:05.813277 systemd-networkd[1393]: cali73cffbc4c27: Link UP Aug 13 07:19:05.814050 systemd-networkd[1393]: cali73cffbc4c27: Gained carrier Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.704 [INFO][3944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.725 [INFO][3944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fcjzr-eth0 csi-node-driver- calico-system 0ec0a1a1-c8b0-4122-ab58-78229dc90d73 991 0 2025-08-13 07:18:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fcjzr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali73cffbc4c27 [] [] }} ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.725 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.761 [INFO][3983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" HandleID="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.762 [INFO][3983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" HandleID="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c63a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fcjzr", "timestamp":"2025-08-13 07:19:05.761804448 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.762 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.762 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.762 [INFO][3983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.770 [INFO][3983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.780 [INFO][3983] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.787 [INFO][3983] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.789 [INFO][3983] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.790 [INFO][3983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.790 [INFO][3983] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.792 [INFO][3983] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210 Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.795 [INFO][3983] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3983] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" host="localhost" Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
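The pod_startup_latency_tracker entry for calico-node-2hwbc above is internally consistent if the SLO clock excludes image-pull time (an inference from the numbers, not from kubelet documentation):

  e2e  = watchObservedRunningTime - podCreationTimestamp = 07:19:05.653643469 - 07:18:43 = 22.653643469s
  pull = lastFinishedPulling - firstStartedPulling = 07:19:04.534457125 - 07:18:43.640606939 = 20.893850186s
  SLO  = 22.653643469s - 20.893850186s = 1.759793283s

which matches podStartSLOduration=1.759793283 exactly. In the coredns entry further down, both pull timestamps are the zero value (0001-01-01), so SLO and e2e coincide at 35.72s.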
Aug 13 07:19:05.830576 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" HandleID="k8s-pod-network.7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.804 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fcjzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ec0a1a1-c8b0-4122-ab58-78229dc90d73", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fcjzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73cffbc4c27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.805 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.805 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73cffbc4c27 ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.817 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.817 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fcjzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ec0a1a1-c8b0-4122-ab58-78229dc90d73", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210", Pod:"csi-node-driver-fcjzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73cffbc4c27", MAC:"ae:73:46:75:fd:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:05.831391 containerd[1456]: 2025-08-13 07:19:05.826 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210" Namespace="calico-system" Pod="csi-node-driver-fcjzr" WorkloadEndpoint="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:05.857291 containerd[1456]: time="2025-08-13T07:19:05.857085308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:05.857291 containerd[1456]: time="2025-08-13T07:19:05.857149891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:05.857291 containerd[1456]: time="2025-08-13T07:19:05.857212279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:05.858261 containerd[1456]: time="2025-08-13T07:19:05.858163295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:05.877006 systemd[1]: Started cri-containerd-7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210.scope - libcontainer container 7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210. 
Aug 13 07:19:05.895248 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:05.923494 systemd-networkd[1393]: cali21447b7a22c: Link UP Aug 13 07:19:05.923769 containerd[1456]: time="2025-08-13T07:19:05.923672704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fcjzr,Uid:0ec0a1a1-c8b0-4122-ab58-78229dc90d73,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210\"" Aug 13 07:19:05.925119 systemd-networkd[1393]: cali21447b7a22c: Gained carrier Aug 13 07:19:05.926956 containerd[1456]: time="2025-08-13T07:19:05.926872892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.726 [INFO][3957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.737 [INFO][3957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0 coredns-674b8bbfcf- kube-system 62b0e9a2-2b8a-410c-bf54-6c522a15fa93 992 0 2025-08-13 07:18:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-sx5l7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21447b7a22c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.737 [INFO][3957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.772 [INFO][3990] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" HandleID="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.772 [INFO][3990] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" HandleID="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019e2b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-sx5l7", "timestamp":"2025-08-13 07:19:05.772583565 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.772 [INFO][3990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.800 [INFO][3990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.872 [INFO][3990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.881 [INFO][3990] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.888 [INFO][3990] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.891 [INFO][3990] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.894 [INFO][3990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.894 [INFO][3990] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.896 [INFO][3990] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996 Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.908 [INFO][3990] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.915 [INFO][3990] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.915 [INFO][3990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" host="localhost" Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.915 [INFO][3990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
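Both sandboxes above run the same IPAM sequence: acquire the host-wide lock, find the block this host has an affinity for (192.168.88.128/26), claim the next free address, write the block back, release the lock. A minimal in-memory sketch with invented types; Calico's real implementation persists the block with a compare-and-swap against its datastore:

```go
// Sketch only: the lock/claim/write sequence visible in the ipam logs.
package main

import (
	"fmt"
	"sync"
)

type block struct {
	cidr string
	free []string // unallocated addresses in the block
}

var hostLock sync.Mutex // stand-in for the "host-wide IPAM lock"

// assign claims n addresses from the host's affine block, mirroring
// "Trying affinity ... Attempting to load block ... Writing block in
// order to claim IPs" in the log above.
func assign(b *block, n int, handle string) ([]string, error) {
	hostLock.Lock()
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	if len(b.free) < n {
		return nil, fmt.Errorf("block %s exhausted", b.cidr)
	}
	claimed := b.free[:n]
	b.free = b.free[n:]
	// Calico writes the updated block back to the datastore here;
	// this sketch just mutates the in-memory copy.
	fmt.Printf("handle %s claimed %v from %s\n", handle, claimed, b.cidr)
	return claimed, nil
}

func main() {
	b := &block{cidr: "192.168.88.128/26",
		free: []string{"192.168.88.129", "192.168.88.130", "192.168.88.131"}}
	assign(b, 1, "k8s-pod-network.example…")
}
```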
Aug 13 07:19:05.941294 containerd[1456]: 2025-08-13 07:19:05.915 [INFO][3990] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" HandleID="k8s-pod-network.32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941897 containerd[1456]: 2025-08-13 07:19:05.920 [INFO][3957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62b0e9a2-2b8a-410c-bf54-6c522a15fa93", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-sx5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21447b7a22c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:05.941897 containerd[1456]: 2025-08-13 07:19:05.920 [INFO][3957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941897 containerd[1456]: 2025-08-13 07:19:05.920 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21447b7a22c ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941897 containerd[1456]: 2025-08-13 07:19:05.926 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.941897 
containerd[1456]: 2025-08-13 07:19:05.926 [INFO][3957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62b0e9a2-2b8a-410c-bf54-6c522a15fa93", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996", Pod:"coredns-674b8bbfcf-sx5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21447b7a22c", MAC:"c2:38:52:31:ea:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:05.941897 containerd[1456]: 2025-08-13 07:19:05.937 [INFO][3957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996" Namespace="kube-system" Pod="coredns-674b8bbfcf-sx5l7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:05.960662 containerd[1456]: time="2025-08-13T07:19:05.960506395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:05.960662 containerd[1456]: time="2025-08-13T07:19:05.960606716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:05.960662 containerd[1456]: time="2025-08-13T07:19:05.960629018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:05.960932 containerd[1456]: time="2025-08-13T07:19:05.960785846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:05.986994 systemd[1]: Started cri-containerd-32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996.scope - libcontainer container 32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996. Aug 13 07:19:05.998506 containerd[1456]: time="2025-08-13T07:19:05.998395780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b95fc767-dstlv,Uid:bed36e62-773a-4f6d-b821-585c60f2d3c7,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:06.002168 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:06.032572 containerd[1456]: time="2025-08-13T07:19:06.032515332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sx5l7,Uid:62b0e9a2-2b8a-410c-bf54-6c522a15fa93,Namespace:kube-system,Attempt:1,} returns sandbox id \"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996\"" Aug 13 07:19:06.033218 kubelet[2512]: E0813 07:19:06.033187 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:06.040109 containerd[1456]: time="2025-08-13T07:19:06.040046188Z" level=info msg="CreateContainer within sandbox \"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:19:06.060473 containerd[1456]: time="2025-08-13T07:19:06.060397471Z" level=info msg="CreateContainer within sandbox \"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14ebca7dd534089c93fa119591a91686a158c9c1e733cc5cab5061f6ed2a1667\"" Aug 13 07:19:06.061779 containerd[1456]: time="2025-08-13T07:19:06.061723097Z" level=info msg="StartContainer for \"14ebca7dd534089c93fa119591a91686a158c9c1e733cc5cab5061f6ed2a1667\"" Aug 13 07:19:06.095018 systemd[1]: Started cri-containerd-14ebca7dd534089c93fa119591a91686a158c9c1e733cc5cab5061f6ed2a1667.scope - libcontainer container 14ebca7dd534089c93fa119591a91686a158c9c1e733cc5cab5061f6ed2a1667. 
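The recurring kubelet error "Nameserver limits exceeded" reflects the classic three-nameserver resolver limit: the node's resolv.conf lists more servers than a pod can use, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 in this log). A sketch of that capping behavior, not kubelet's actual code; the limit of 3 is the glibc-era ceiling kubelet enforces:

```go
// Illustrative only: kubelet-style capping of resolv.conf nameservers.
package main

import "fmt"

const maxNameservers = 3 // assumption: the limit kubelet applies

func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n",
		ns[:maxNameservers])
	return ns[:maxNameservers]
}

func main() {
	applied := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	fmt.Println(applied) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```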
Aug 13 07:19:06.129373 systemd-networkd[1393]: calid9a16f8e80e: Link UP Aug 13 07:19:06.135914 systemd-networkd[1393]: calid9a16f8e80e: Gained carrier Aug 13 07:19:06.139306 containerd[1456]: time="2025-08-13T07:19:06.138966381Z" level=info msg="StartContainer for \"14ebca7dd534089c93fa119591a91686a158c9c1e733cc5cab5061f6ed2a1667\" returns successfully" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.042 [INFO][4091] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.056 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--75b95fc767--dstlv-eth0 whisker-75b95fc767- calico-system bed36e62-773a-4f6d-b821-585c60f2d3c7 1008 0 2025-08-13 07:19:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75b95fc767 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-75b95fc767-dstlv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid9a16f8e80e [] [] }} ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.056 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.084 [INFO][4110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" HandleID="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Workload="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.085 [INFO][4110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" HandleID="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Workload="localhost-k8s-whisker--75b95fc767--dstlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-75b95fc767-dstlv", "timestamp":"2025-08-13 07:19:06.084887672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.086 [INFO][4110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.086 [INFO][4110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
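The WorkloadEndpoint dumps above print ports as Go hex literals, and they decode to the decimal ports in the CNI plugin lines: 0x35 = 3*16 + 5 = 53 (dns and dns-tcp) and 0x23c1 = 2*4096 + 3*256 + 12*16 + 1 = 9153 (metrics).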
Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.086 [INFO][4110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.093 [INFO][4110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.099 [INFO][4110] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.106 [INFO][4110] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.107 [INFO][4110] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.110 [INFO][4110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.110 [INFO][4110] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.112 [INFO][4110] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9 Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.115 [INFO][4110] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.122 [INFO][4110] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.122 [INFO][4110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" host="localhost" Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.122 [INFO][4110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
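All three endpoints so far draw consecutive addresses (.129, .130, .131) from the same affine block. A /26 leaves 32 - 26 = 6 host bits, i.e. 2^6 = 64 addresses, so 192.168.88.128/26 covers 192.168.88.128 through 192.168.88.191, and the host-wide lock serializes the assignments, which is why the addresses come out in order.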
Aug 13 07:19:06.146892 containerd[1456]: 2025-08-13 07:19:06.122 [INFO][4110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" HandleID="k8s-pod-network.b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Workload="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.127 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75b95fc767--dstlv-eth0", GenerateName:"whisker-75b95fc767-", Namespace:"calico-system", SelfLink:"", UID:"bed36e62-773a-4f6d-b821-585c60f2d3c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75b95fc767", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-75b95fc767-dstlv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid9a16f8e80e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.127 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.127 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9a16f8e80e ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.133 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.134 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75b95fc767--dstlv-eth0", GenerateName:"whisker-75b95fc767-", Namespace:"calico-system", SelfLink:"", UID:"bed36e62-773a-4f6d-b821-585c60f2d3c7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75b95fc767", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9", Pod:"whisker-75b95fc767-dstlv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid9a16f8e80e", MAC:"92:6f:fe:71:01:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:06.147470 containerd[1456]: 2025-08-13 07:19:06.142 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9" Namespace="calico-system" Pod="whisker-75b95fc767-dstlv" WorkloadEndpoint="localhost-k8s-whisker--75b95fc767--dstlv-eth0" Aug 13 07:19:06.171016 containerd[1456]: time="2025-08-13T07:19:06.170883190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:06.171367 containerd[1456]: time="2025-08-13T07:19:06.170943754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:06.171367 containerd[1456]: time="2025-08-13T07:19:06.171073771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:06.171367 containerd[1456]: time="2025-08-13T07:19:06.171246579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:06.196558 systemd[1]: Started cri-containerd-b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9.scope - libcontainer container b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9. 
Aug 13 07:19:06.215569 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:06.258168 containerd[1456]: time="2025-08-13T07:19:06.257944010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b95fc767-dstlv,Uid:bed36e62-773a-4f6d-b821-585c60f2d3c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9\"" Aug 13 07:19:06.410932 containerd[1456]: time="2025-08-13T07:19:06.409876608Z" level=info msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" Aug 13 07:19:06.412741 kubelet[2512]: I0813 07:19:06.412000 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab627325-2749-42aa-91f9-75c79fd24e77" path="/var/lib/kubelet/pods/ab627325-2749-42aa-91f9-75c79fd24e77/volumes" Aug 13 07:19:06.552332 kubelet[2512]: E0813 07:19:06.551905 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:06.724759 kubelet[2512]: I0813 07:19:06.722717 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sx5l7" podStartSLOduration=35.72269965 podStartE2EDuration="35.72269965s" podCreationTimestamp="2025-08-13 07:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:06.696109122 +0000 UTC m=+42.380819601" watchObservedRunningTime="2025-08-13 07:19:06.72269965 +0000 UTC m=+42.407410129" Aug 13 07:19:06.760048 systemd[1]: run-containerd-runc-k8s.io-7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17-runc.aOIeBe.mount: Deactivated successfully. Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.634 [INFO][4281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.634 [INFO][4281] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" iface="eth0" netns="/var/run/netns/cni-8c98f9e7-3ac5-7acd-d1b9-b5c18494879f" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.635 [INFO][4281] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" iface="eth0" netns="/var/run/netns/cni-8c98f9e7-3ac5-7acd-d1b9-b5c18494879f" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.635 [INFO][4281] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" iface="eth0" netns="/var/run/netns/cni-8c98f9e7-3ac5-7acd-d1b9-b5c18494879f" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.635 [INFO][4281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.635 [INFO][4281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.792 [INFO][4293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.793 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.794 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.805 [WARNING][4293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.805 [INFO][4293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.808 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:06.819765 containerd[1456]: 2025-08-13 07:19:06.814 [INFO][4281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:06.822506 containerd[1456]: time="2025-08-13T07:19:06.822455961Z" level=info msg="TearDown network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" successfully" Aug 13 07:19:06.822558 containerd[1456]: time="2025-08-13T07:19:06.822507168Z" level=info msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" returns successfully" Aug 13 07:19:06.823002 kubelet[2512]: E0813 07:19:06.822969 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:06.823674 containerd[1456]: time="2025-08-13T07:19:06.823651841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xx8kw,Uid:7664d1a0-e7f0-48d5-bd0d-61e02b72f59f,Namespace:kube-system,Attempt:1,}" Aug 13 07:19:06.826961 systemd[1]: run-netns-cni\x2d8c98f9e7\x2d3ac5\x2d7acd\x2dd1b9\x2db5c18494879f.mount: Deactivated successfully. 
Aug 13 07:19:06.842849 kernel: bpftool[4386]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:19:07.023192 systemd-networkd[1393]: calib177ba84fa6: Link UP Aug 13 07:19:07.023454 systemd-networkd[1393]: calib177ba84fa6: Gained carrier Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.933 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0 coredns-674b8bbfcf- kube-system 7664d1a0-e7f0-48d5-bd0d-61e02b72f59f 1036 0 2025-08-13 07:18:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xx8kw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib177ba84fa6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.933 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.969 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" HandleID="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.969 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" HandleID="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xx8kw", "timestamp":"2025-08-13 07:19:06.969331289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.969 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.969 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.972 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.983 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.990 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.995 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.997 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.999 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:06.999 [INFO][4405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.002 [INFO][4405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.007 [INFO][4405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.013 [INFO][4405] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.013 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" host="localhost" Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.013 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
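The kernel's bpftool complaint above ("memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set") means the caller did not declare whether the memfd may ever become executable. A sketch of an explicit call via golang.org/x/sys/unix; the MFD_NOEXEC_SEAL constant assumes a recent x/sys and Linux 6.3 or later:

```go
// Sketch: create a memfd while declaring it will never be executable,
// which silences the kernel warning seen in the log above.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.MemfdCreate("example", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)
}
```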
Aug 13 07:19:07.039457 containerd[1456]: 2025-08-13 07:19:07.013 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" HandleID="k8s-pod-network.0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.040933 containerd[1456]: 2025-08-13 07:19:07.017 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xx8kw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib177ba84fa6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.040933 containerd[1456]: 2025-08-13 07:19:07.018 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.040933 containerd[1456]: 2025-08-13 07:19:07.018 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib177ba84fa6 ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.040933 containerd[1456]: 2025-08-13 07:19:07.024 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.040933 
containerd[1456]: 2025-08-13 07:19:07.025 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c", Pod:"coredns-674b8bbfcf-xx8kw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib177ba84fa6", MAC:"86:81:53:8e:e1:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.040933 containerd[1456]: 2025-08-13 07:19:07.033 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c" Namespace="kube-system" Pod="coredns-674b8bbfcf-xx8kw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:07.064554 containerd[1456]: time="2025-08-13T07:19:07.064044427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:07.064554 containerd[1456]: time="2025-08-13T07:19:07.064164375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:07.064554 containerd[1456]: time="2025-08-13T07:19:07.064186106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:07.064554 containerd[1456]: time="2025-08-13T07:19:07.064337734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:07.088029 systemd[1]: Started cri-containerd-0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c.scope - libcontainer container 0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c. Aug 13 07:19:07.105748 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:07.144744 containerd[1456]: time="2025-08-13T07:19:07.144702267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xx8kw,Uid:7664d1a0-e7f0-48d5-bd0d-61e02b72f59f,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c\"" Aug 13 07:19:07.145979 kubelet[2512]: E0813 07:19:07.145956 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:07.154594 containerd[1456]: time="2025-08-13T07:19:07.154539607Z" level=info msg="CreateContainer within sandbox \"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:19:07.161980 systemd-networkd[1393]: vxlan.calico: Link UP Aug 13 07:19:07.161990 systemd-networkd[1393]: vxlan.calico: Gained carrier Aug 13 07:19:07.182600 containerd[1456]: time="2025-08-13T07:19:07.182412727Z" level=info msg="CreateContainer within sandbox \"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3539e5eabb28cc544502e4e507d4e373659bb9552e6b840c1619d581ed0bc4c3\"" Aug 13 07:19:07.183690 containerd[1456]: time="2025-08-13T07:19:07.183653622Z" level=info msg="StartContainer for \"3539e5eabb28cc544502e4e507d4e373659bb9552e6b840c1619d581ed0bc4c3\"" Aug 13 07:19:07.200293 systemd-networkd[1393]: calid9a16f8e80e: Gained IPv6LL Aug 13 07:19:07.221064 systemd[1]: Started cri-containerd-3539e5eabb28cc544502e4e507d4e373659bb9552e6b840c1619d581ed0bc4c3.scope - libcontainer container 3539e5eabb28cc544502e4e507d4e373659bb9552e6b840c1619d581ed0bc4c3. Aug 13 07:19:07.254009 containerd[1456]: time="2025-08-13T07:19:07.253927092Z" level=info msg="StartContainer for \"3539e5eabb28cc544502e4e507d4e373659bb9552e6b840c1619d581ed0bc4c3\" returns successfully" Aug 13 07:19:07.400308 containerd[1456]: time="2025-08-13T07:19:07.399889772Z" level=info msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" Aug 13 07:19:07.400308 containerd[1456]: time="2025-08-13T07:19:07.400112635Z" level=info msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" Aug 13 07:19:07.442004 systemd-networkd[1393]: cali73cffbc4c27: Gained IPv6LL Aug 13 07:19:07.508735 systemd-networkd[1393]: cali21447b7a22c: Gained IPv6LL Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.451 [INFO][4562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.451 [INFO][4562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" iface="eth0" netns="/var/run/netns/cni-ad36e352-2fd3-a2c4-bcd6-0b1f955f98f9" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.452 [INFO][4562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" iface="eth0" netns="/var/run/netns/cni-ad36e352-2fd3-a2c4-bcd6-0b1f955f98f9" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.455 [INFO][4562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" iface="eth0" netns="/var/run/netns/cni-ad36e352-2fd3-a2c4-bcd6-0b1f955f98f9" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.456 [INFO][4562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.456 [INFO][4562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.491 [INFO][4596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.492 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.492 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.504 [WARNING][4596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.507 [INFO][4596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.512 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.524239 containerd[1456]: 2025-08-13 07:19:07.519 [INFO][4562] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:07.524981 containerd[1456]: time="2025-08-13T07:19:07.524481021Z" level=info msg="TearDown network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" successfully" Aug 13 07:19:07.524981 containerd[1456]: time="2025-08-13T07:19:07.524522220Z" level=info msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" returns successfully" Aug 13 07:19:07.525616 containerd[1456]: time="2025-08-13T07:19:07.525587360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-nrt5s,Uid:2740fd78-4ba0-40d0-9638-65458c5f2e1e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.450 [INFO][4561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.451 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" iface="eth0" netns="/var/run/netns/cni-46a40235-4307-bec0-06b4-d9811778f63b" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.452 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" iface="eth0" netns="/var/run/netns/cni-46a40235-4307-bec0-06b4-d9811778f63b" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.453 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" iface="eth0" netns="/var/run/netns/cni-46a40235-4307-bec0-06b4-d9811778f63b" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.453 [INFO][4561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.453 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.496 [INFO][4589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.497 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.512 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.523 [WARNING][4589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.523 [INFO][4589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.525 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:07.532455 containerd[1456]: 2025-08-13 07:19:07.528 [INFO][4561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:07.533005 containerd[1456]: time="2025-08-13T07:19:07.532881795Z" level=info msg="TearDown network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" successfully" Aug 13 07:19:07.533005 containerd[1456]: time="2025-08-13T07:19:07.532909117Z" level=info msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" returns successfully" Aug 13 07:19:07.533689 containerd[1456]: time="2025-08-13T07:19:07.533655273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-5xw68,Uid:bab077f6-800e-450e-ac7f-4fa8a8599eca,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:19:07.546365 systemd[1]: run-netns-cni\x2d46a40235\x2d4307\x2dbec0\x2d06b4\x2dd9811778f63b.mount: Deactivated successfully. Aug 13 07:19:07.546515 systemd[1]: run-netns-cni\x2dad36e352\x2d2fd3\x2da2c4\x2dbcd6\x2d0b1f955f98f9.mount: Deactivated successfully. 
Aug 13 07:19:07.566851 kubelet[2512]: E0813 07:19:07.566788 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:07.568383 kubelet[2512]: E0813 07:19:07.567465 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:07.597490 kubelet[2512]: I0813 07:19:07.597376 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xx8kw" podStartSLOduration=36.597338887 podStartE2EDuration="36.597338887s" podCreationTimestamp="2025-08-13 07:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:07.58202898 +0000 UTC m=+43.266739459" watchObservedRunningTime="2025-08-13 07:19:07.597338887 +0000 UTC m=+43.282049366" Aug 13 07:19:07.692793 systemd-networkd[1393]: cali6043a71b0b4: Link UP Aug 13 07:19:07.693972 systemd-networkd[1393]: cali6043a71b0b4: Gained carrier Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.613 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0 calico-apiserver-655dd967b8- calico-apiserver 2740fd78-4ba0-40d0-9638-65458c5f2e1e 1058 0 2025-08-13 07:18:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655dd967b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655dd967b8-nrt5s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6043a71b0b4 [] [] }} ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.613 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.650 [INFO][4663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" HandleID="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.651 [INFO][4663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" HandleID="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655dd967b8-nrt5s", "timestamp":"2025-08-13 07:19:07.650944915 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.651 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.651 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.651 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.661 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.665 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.669 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.671 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.673 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.674 [INFO][4663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.676 [INFO][4663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52 Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.679 [INFO][4663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" host="localhost" Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:19:07.708651 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" HandleID="k8s-pod-network.88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.689 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2740fd78-4ba0-40d0-9638-65458c5f2e1e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655dd967b8-nrt5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6043a71b0b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.689 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.689 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6043a71b0b4 ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.694 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.695 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2740fd78-4ba0-40d0-9638-65458c5f2e1e", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52", Pod:"calico-apiserver-655dd967b8-nrt5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6043a71b0b4", MAC:"9a:b3:4b:05:35:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.709230 containerd[1456]: 2025-08-13 07:19:07.705 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-nrt5s" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:07.732700 containerd[1456]: time="2025-08-13T07:19:07.732556474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:07.732700 containerd[1456]: time="2025-08-13T07:19:07.732641505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:07.732700 containerd[1456]: time="2025-08-13T07:19:07.732659138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:07.733394 containerd[1456]: time="2025-08-13T07:19:07.733290777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:07.762025 systemd[1]: Started cri-containerd-88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52.scope - libcontainer container 88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52. 
Aug 13 07:19:07.785690 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:07.821268 systemd-networkd[1393]: calib8e3ee6909a: Link UP Aug 13 07:19:07.821757 systemd-networkd[1393]: calib8e3ee6909a: Gained carrier Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.632 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0 calico-apiserver-655dd967b8- calico-apiserver bab077f6-800e-450e-ac7f-4fa8a8599eca 1057 0 2025-08-13 07:18:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655dd967b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655dd967b8-5xw68 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib8e3ee6909a [] [] }} ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.633 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.662 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" HandleID="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.662 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" HandleID="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655dd967b8-5xw68", "timestamp":"2025-08-13 07:19:07.662567804 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.662 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.686 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.764 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.775 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.781 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.784 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.785 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.786 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.787 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.797 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.806 [INFO][4670] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.806 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" host="localhost" Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.806 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:19:07.837858 containerd[1456]: 2025-08-13 07:19:07.806 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" HandleID="k8s-pod-network.95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.812 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab077f6-800e-450e-ac7f-4fa8a8599eca", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655dd967b8-5xw68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e3ee6909a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.812 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.813 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8e3ee6909a ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.819 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.819 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab077f6-800e-450e-ac7f-4fa8a8599eca", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b", Pod:"calico-apiserver-655dd967b8-5xw68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e3ee6909a", MAC:"f2:5f:d7:75:ae:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:07.842147 containerd[1456]: 2025-08-13 07:19:07.830 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b" Namespace="calico-apiserver" Pod="calico-apiserver-655dd967b8-5xw68" WorkloadEndpoint="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:07.842147 containerd[1456]: time="2025-08-13T07:19:07.836461052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-nrt5s,Uid:2740fd78-4ba0-40d0-9638-65458c5f2e1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52\"" Aug 13 07:19:08.201166 containerd[1456]: time="2025-08-13T07:19:08.201037981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:08.201166 containerd[1456]: time="2025-08-13T07:19:08.201108604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:08.201166 containerd[1456]: time="2025-08-13T07:19:08.201121089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:08.201432 containerd[1456]: time="2025-08-13T07:19:08.201255984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:08.226996 systemd[1]: Started cri-containerd-95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b.scope - libcontainer container 95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b. 
Aug 13 07:19:08.243298 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:08.279269 containerd[1456]: time="2025-08-13T07:19:08.279215870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655dd967b8-5xw68,Uid:bab077f6-800e-450e-ac7f-4fa8a8599eca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b\"" Aug 13 07:19:08.295641 containerd[1456]: time="2025-08-13T07:19:08.295552898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.296436 containerd[1456]: time="2025-08-13T07:19:08.296355282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:19:08.297712 containerd[1456]: time="2025-08-13T07:19:08.297656810Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.300232 containerd[1456]: time="2025-08-13T07:19:08.300187913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.300963 containerd[1456]: time="2025-08-13T07:19:08.300932817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.374016513s" Aug 13 07:19:08.300963 containerd[1456]: time="2025-08-13T07:19:08.300968384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:19:08.310778 containerd[1456]: time="2025-08-13T07:19:08.310580163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:19:08.315781 containerd[1456]: time="2025-08-13T07:19:08.315738420Z" level=info msg="CreateContainer within sandbox \"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:19:08.571408 kubelet[2512]: E0813 07:19:08.571272 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:08.571903 kubelet[2512]: E0813 07:19:08.571455 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:08.660087 containerd[1456]: time="2025-08-13T07:19:08.660028165Z" level=info msg="CreateContainer within sandbox \"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d\"" Aug 13 07:19:08.660872 containerd[1456]: time="2025-08-13T07:19:08.660830087Z" level=info msg="StartContainer for \"561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d\"" Aug 13 07:19:08.694626 systemd[1]: 
run-containerd-runc-k8s.io-561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d-runc.U9a8IP.mount: Deactivated successfully. Aug 13 07:19:08.703987 systemd[1]: Started cri-containerd-561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d.scope - libcontainer container 561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d. Aug 13 07:19:08.742652 containerd[1456]: time="2025-08-13T07:19:08.742604240Z" level=info msg="StartContainer for \"561bc002187dfc2e8c2c4c6754733de5fc51035eefbe06219bc8cba450a4953d\" returns successfully" Aug 13 07:19:08.914633 systemd-networkd[1393]: calib8e3ee6909a: Gained IPv6LL Aug 13 07:19:08.915161 systemd-networkd[1393]: calib177ba84fa6: Gained IPv6LL Aug 13 07:19:08.915428 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Aug 13 07:19:09.298061 systemd-networkd[1393]: cali6043a71b0b4: Gained IPv6LL Aug 13 07:19:09.575469 kubelet[2512]: E0813 07:19:09.575325 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:09.939506 containerd[1456]: time="2025-08-13T07:19:09.939427158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.940278 containerd[1456]: time="2025-08-13T07:19:09.940237755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:19:09.941548 containerd[1456]: time="2025-08-13T07:19:09.941516682Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.943886 containerd[1456]: time="2025-08-13T07:19:09.943857542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:09.944538 containerd[1456]: time="2025-08-13T07:19:09.944513777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.63388933s" Aug 13 07:19:09.944573 containerd[1456]: time="2025-08-13T07:19:09.944542632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:19:09.946244 containerd[1456]: time="2025-08-13T07:19:09.946091039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:19:09.949893 containerd[1456]: time="2025-08-13T07:19:09.949853337Z" level=info msg="CreateContainer within sandbox \"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:19:09.964664 containerd[1456]: time="2025-08-13T07:19:09.964615462Z" level=info msg="CreateContainer within sandbox \"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f8b839f630756b02f7479c2b51a3b00dbfe09081101045de7cffb3c36f235759\"" Aug 13 07:19:09.965581 containerd[1456]: 
time="2025-08-13T07:19:09.965233794Z" level=info msg="StartContainer for \"f8b839f630756b02f7479c2b51a3b00dbfe09081101045de7cffb3c36f235759\"" Aug 13 07:19:09.999948 systemd[1]: Started cri-containerd-f8b839f630756b02f7479c2b51a3b00dbfe09081101045de7cffb3c36f235759.scope - libcontainer container f8b839f630756b02f7479c2b51a3b00dbfe09081101045de7cffb3c36f235759. Aug 13 07:19:10.082830 containerd[1456]: time="2025-08-13T07:19:10.082770258Z" level=info msg="StartContainer for \"f8b839f630756b02f7479c2b51a3b00dbfe09081101045de7cffb3c36f235759\" returns successfully" Aug 13 07:19:10.305623 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:43932.service - OpenSSH per-connection server daemon (10.0.0.1:43932). Aug 13 07:19:10.360556 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 43932 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:10.362502 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:10.366926 systemd-logind[1436]: New session 9 of user core. Aug 13 07:19:10.375969 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:19:10.510732 sshd[4869]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:10.516113 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:43932.service: Deactivated successfully. Aug 13 07:19:10.518109 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:19:10.518721 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:19:10.519762 systemd-logind[1436]: Removed session 9. Aug 13 07:19:12.191660 containerd[1456]: time="2025-08-13T07:19:12.191608652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.192534 containerd[1456]: time="2025-08-13T07:19:12.192270697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:19:12.193774 containerd[1456]: time="2025-08-13T07:19:12.193709736Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.196245 containerd[1456]: time="2025-08-13T07:19:12.196200248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.196881 containerd[1456]: time="2025-08-13T07:19:12.196842436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.2507218s" Aug 13 07:19:12.196881 containerd[1456]: time="2025-08-13T07:19:12.196876841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:19:12.197890 containerd[1456]: time="2025-08-13T07:19:12.197863422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:19:12.202027 containerd[1456]: time="2025-08-13T07:19:12.201990396Z" level=info msg="CreateContainer within sandbox 
\"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:19:12.240094 containerd[1456]: time="2025-08-13T07:19:12.240035130Z" level=info msg="CreateContainer within sandbox \"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25e271654f4471a7ef6735388cc0ad395ca386eb435cd1161759d5df5b347f6c\"" Aug 13 07:19:12.240874 containerd[1456]: time="2025-08-13T07:19:12.240807654Z" level=info msg="StartContainer for \"25e271654f4471a7ef6735388cc0ad395ca386eb435cd1161759d5df5b347f6c\"" Aug 13 07:19:12.281991 systemd[1]: Started cri-containerd-25e271654f4471a7ef6735388cc0ad395ca386eb435cd1161759d5df5b347f6c.scope - libcontainer container 25e271654f4471a7ef6735388cc0ad395ca386eb435cd1161759d5df5b347f6c. Aug 13 07:19:12.330311 containerd[1456]: time="2025-08-13T07:19:12.330240560Z" level=info msg="StartContainer for \"25e271654f4471a7ef6735388cc0ad395ca386eb435cd1161759d5df5b347f6c\" returns successfully" Aug 13 07:19:12.602573 kubelet[2512]: I0813 07:19:12.602381 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655dd967b8-nrt5s" podStartSLOduration=28.242953109 podStartE2EDuration="32.602362858s" podCreationTimestamp="2025-08-13 07:18:40 +0000 UTC" firstStartedPulling="2025-08-13 07:19:07.838255738 +0000 UTC m=+43.522966217" lastFinishedPulling="2025-08-13 07:19:12.197665486 +0000 UTC m=+47.882375966" observedRunningTime="2025-08-13 07:19:12.601090505 +0000 UTC m=+48.285800984" watchObservedRunningTime="2025-08-13 07:19:12.602362858 +0000 UTC m=+48.287073337" Aug 13 07:19:12.688794 containerd[1456]: time="2025-08-13T07:19:12.688716276Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:12.690224 containerd[1456]: time="2025-08-13T07:19:12.689487047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:19:12.691929 containerd[1456]: time="2025-08-13T07:19:12.691885795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 493.993739ms" Aug 13 07:19:12.691929 containerd[1456]: time="2025-08-13T07:19:12.691928355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:19:12.693099 containerd[1456]: time="2025-08-13T07:19:12.693063718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:19:12.697316 containerd[1456]: time="2025-08-13T07:19:12.697264804Z" level=info msg="CreateContainer within sandbox \"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:19:12.712010 containerd[1456]: time="2025-08-13T07:19:12.711958613Z" level=info msg="CreateContainer within sandbox \"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"51b759a7a55dddcb1c0b137a1ba84c97bb485d838cea4127eba58afdca874730\"" Aug 13 07:19:12.712731 containerd[1456]: time="2025-08-13T07:19:12.712591874Z" level=info msg="StartContainer for \"51b759a7a55dddcb1c0b137a1ba84c97bb485d838cea4127eba58afdca874730\"" Aug 13 07:19:12.744974 systemd[1]: Started cri-containerd-51b759a7a55dddcb1c0b137a1ba84c97bb485d838cea4127eba58afdca874730.scope - libcontainer container 51b759a7a55dddcb1c0b137a1ba84c97bb485d838cea4127eba58afdca874730. Aug 13 07:19:12.791723 containerd[1456]: time="2025-08-13T07:19:12.791668434Z" level=info msg="StartContainer for \"51b759a7a55dddcb1c0b137a1ba84c97bb485d838cea4127eba58afdca874730\" returns successfully" Aug 13 07:19:13.587198 kubelet[2512]: I0813 07:19:13.587159 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:19:13.697462 kubelet[2512]: I0813 07:19:13.697369 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655dd967b8-5xw68" podStartSLOduration=29.285714725 podStartE2EDuration="33.697348543s" podCreationTimestamp="2025-08-13 07:18:40 +0000 UTC" firstStartedPulling="2025-08-13 07:19:08.281168895 +0000 UTC m=+43.965879374" lastFinishedPulling="2025-08-13 07:19:12.692802713 +0000 UTC m=+48.377513192" observedRunningTime="2025-08-13 07:19:13.614136778 +0000 UTC m=+49.298847267" watchObservedRunningTime="2025-08-13 07:19:13.697348543 +0000 UTC m=+49.382059012" Aug 13 07:19:14.564538 containerd[1456]: time="2025-08-13T07:19:14.564460923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.565410 containerd[1456]: time="2025-08-13T07:19:14.565345370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:19:14.566694 containerd[1456]: time="2025-08-13T07:19:14.566650774Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.569178 containerd[1456]: time="2025-08-13T07:19:14.569129262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:14.569682 containerd[1456]: time="2025-08-13T07:19:14.569649167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.876555362s" Aug 13 07:19:14.569738 containerd[1456]: time="2025-08-13T07:19:14.569680436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:19:14.571057 containerd[1456]: time="2025-08-13T07:19:14.570657648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:19:14.574570 containerd[1456]: time="2025-08-13T07:19:14.574538013Z" level=info msg="CreateContainer within sandbox \"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:19:14.606706 containerd[1456]: time="2025-08-13T07:19:14.606642935Z" level=info msg="CreateContainer within sandbox \"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef09117f823296fef0697fd78e9b5e847bdc49fff35361ecf0975618fffffca3\"" Aug 13 07:19:14.607203 containerd[1456]: time="2025-08-13T07:19:14.607171417Z" level=info msg="StartContainer for \"ef09117f823296fef0697fd78e9b5e847bdc49fff35361ecf0975618fffffca3\"" Aug 13 07:19:14.649036 systemd[1]: Started cri-containerd-ef09117f823296fef0697fd78e9b5e847bdc49fff35361ecf0975618fffffca3.scope - libcontainer container ef09117f823296fef0697fd78e9b5e847bdc49fff35361ecf0975618fffffca3. Aug 13 07:19:14.680216 containerd[1456]: time="2025-08-13T07:19:14.680168598Z" level=info msg="StartContainer for \"ef09117f823296fef0697fd78e9b5e847bdc49fff35361ecf0975618fffffca3\" returns successfully" Aug 13 07:19:15.462209 kubelet[2512]: I0813 07:19:15.462163 2512 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:19:15.485861 kubelet[2512]: I0813 07:19:15.485831 2512 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:19:15.527877 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:43944.service - OpenSSH per-connection server daemon (10.0.0.1:43944). Aug 13 07:19:15.592587 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:15.594982 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:15.600383 systemd-logind[1436]: New session 10 of user core. Aug 13 07:19:15.608124 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:19:16.138639 sshd[5034]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:16.142322 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:43944.service: Deactivated successfully. Aug 13 07:19:16.144478 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:19:16.145281 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:19:16.146186 systemd-logind[1436]: Removed session 10. 
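
The pod_startup_latency_tracker entry above for calico-apiserver-655dd967b8-5xw68 records four checkpoints (podCreationTimestamp, firstStartedPulling, lastFinishedPulling, observedRunningTime), and the two reported durations are mutually consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, matching the upstream tracker's convention of excluding pull time from the startup SLI. A minimal standalone sketch (not kubelet code) that reproduces the arithmetic from the timestamps in this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching the "2025-08-13 07:19:08.281168895 +0000 UTC" form used in the log.
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-08-13 07:18:40 +0000 UTC")
    	firstPull := parse("2025-08-13 07:19:08.281168895 +0000 UTC")
    	lastPull := parse("2025-08-13 07:19:12.692802713 +0000 UTC")
    	observed := parse("2025-08-13 07:19:13.697348543 +0000 UTC")

    	e2e := observed.Sub(created)         // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time
    	fmt.Println(e2e, slo)                // 33.697348543s 29.285714725s
    }

The same relation holds for the csi-node-driver-fcjzr entry further down (34.038576535s end to end, 8.644184987s of pulling, 25.394391547s SLO, modulo the last nanosecond of rounding).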
Aug 13 07:19:16.400230 containerd[1456]: time="2025-08-13T07:19:16.399975289Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:19:17.038754 kubelet[2512]: I0813 07:19:17.038606 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fcjzr" podStartSLOduration=25.394391547 podStartE2EDuration="34.038576535s" podCreationTimestamp="2025-08-13 07:18:43 +0000 UTC" firstStartedPulling="2025-08-13 07:19:05.926354908 +0000 UTC m=+41.611065387" lastFinishedPulling="2025-08-13 07:19:14.570539895 +0000 UTC m=+50.255250375" observedRunningTime="2025-08-13 07:19:16.131311348 +0000 UTC m=+51.816021827" watchObservedRunningTime="2025-08-13 07:19:17.038576535 +0000 UTC m=+52.723287014" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.038 [INFO][5059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.038 [INFO][5059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" iface="eth0" netns="/var/run/netns/cni-c393dff1-19a9-a9c5-47e3-50af6f2246c7" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.039 [INFO][5059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" iface="eth0" netns="/var/run/netns/cni-c393dff1-19a9-a9c5-47e3-50af6f2246c7" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.039 [INFO][5059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" iface="eth0" netns="/var/run/netns/cni-c393dff1-19a9-a9c5-47e3-50af6f2246c7" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.040 [INFO][5059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.040 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.062 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.062 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.062 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.085 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.085 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.086 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:17.094033 containerd[1456]: 2025-08-13 07:19:17.089 [INFO][5059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:17.094550 containerd[1456]: time="2025-08-13T07:19:17.094347492Z" level=info msg="TearDown network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" successfully" Aug 13 07:19:17.094550 containerd[1456]: time="2025-08-13T07:19:17.094382548Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" returns successfully" Aug 13 07:19:17.095380 containerd[1456]: time="2025-08-13T07:19:17.095349280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lknln,Uid:c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b,Namespace:calico-system,Attempt:1,}" Aug 13 07:19:17.097614 systemd[1]: run-netns-cni\x2dc393dff1\x2d19a9\x2da9c5\x2d47e3\x2d50af6f2246c7.mount: Deactivated successfully. Aug 13 07:19:17.663552 systemd-networkd[1393]: cali00b0e44d8aa: Link UP Aug 13 07:19:17.664662 systemd-networkd[1393]: cali00b0e44d8aa: Gained carrier Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.599 [INFO][5082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--lknln-eth0 goldmane-768f4c5c69- calico-system c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b 1156 0 2025-08-13 07:18:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-lknln eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali00b0e44d8aa [] [] }} ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.599 [INFO][5082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.624 [INFO][5096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" HandleID="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 
07:19:17.625 [INFO][5096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" HandleID="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021d7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-lknln", "timestamp":"2025-08-13 07:19:17.624938182 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.625 [INFO][5096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.625 [INFO][5096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.625 [INFO][5096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.632 [INFO][5096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.636 [INFO][5096] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.640 [INFO][5096] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.642 [INFO][5096] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.644 [INFO][5096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.644 [INFO][5096] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.645 [INFO][5096] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.648 [INFO][5096] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.655 [INFO][5096] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.655 [INFO][5096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" host="localhost" Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.655 [INFO][5096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
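
The IPAM walk above shows Calico confirming this host's affinity to block 192.168.88.128/26 and then claiming 192.168.88.135 from it for goldmane-768f4c5c69-lknln; the "Writing block in order to claim IPs" step is the datastore write that makes the claim durable. As a toy illustration of block-affinity assignment only (real Calico uses a per-block allocation bitmap plus a compare-and-swap datastore write, not an in-memory map):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block hands out addresses from a CIDR the host holds an affinity for.
    type block struct {
    	cidr netip.Prefix
    	used map[netip.Addr]bool
    }

    // assign returns the first free address in the block, if any remain.
    func (b *block) assign() (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if !b.used[a] {
    			b.used[a] = true
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]bool{}}
    	for i := 0; i < 7; i++ { // pretend .128-.134 went to earlier workloads
    		b.assign()
    	}
    	ip, _ := b.assign()
    	fmt.Println(ip) // 192.168.88.135, the address the goldmane pod receives
    }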
Aug 13 07:19:17.875348 containerd[1456]: 2025-08-13 07:19:17.655 [INFO][5096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" HandleID="k8s-pod-network.1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.659 [INFO][5082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lknln-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-lknln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00b0e44d8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.659 [INFO][5082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.660 [INFO][5082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00b0e44d8aa ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.665 [INFO][5082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.665 [INFO][5082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lknln-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa", Pod:"goldmane-768f4c5c69-lknln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00b0e44d8aa", MAC:"16:4c:cb:ca:c9:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:18.078625 containerd[1456]: 2025-08-13 07:19:17.872 [INFO][5082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa" Namespace="calico-system" Pod="goldmane-768f4c5c69-lknln" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:18.399907 containerd[1456]: time="2025-08-13T07:19:18.399717900Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:19:18.966035 systemd-journald[1117]: Under memory pressure, flushing caches. Aug 13 07:19:18.986335 containerd[1456]: time="2025-08-13T07:19:18.985914564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:18.986335 containerd[1456]: time="2025-08-13T07:19:18.985971943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:18.986335 containerd[1456]: time="2025-08-13T07:19:18.985986039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:18.986335 containerd[1456]: time="2025-08-13T07:19:18.986061944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.942 [INFO][5128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.943 [INFO][5128] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" iface="eth0" netns="/var/run/netns/cni-15d0fe90-745f-7238-21be-364c2398ce82" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.944 [INFO][5128] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" iface="eth0" netns="/var/run/netns/cni-15d0fe90-745f-7238-21be-364c2398ce82" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.945 [INFO][5128] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" iface="eth0" netns="/var/run/netns/cni-15d0fe90-745f-7238-21be-364c2398ce82" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.945 [INFO][5128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:18.945 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.010 [INFO][5139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.011 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.011 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.028 [WARNING][5139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.030 [INFO][5139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.037 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:19.051531 containerd[1456]: 2025-08-13 07:19:19.045 [INFO][5128] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:19.054037 containerd[1456]: time="2025-08-13T07:19:19.053992302Z" level=info msg="TearDown network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" successfully" Aug 13 07:19:19.054107 containerd[1456]: time="2025-08-13T07:19:19.054028541Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" returns successfully" Aug 13 07:19:19.054836 containerd[1456]: time="2025-08-13T07:19:19.054789082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc56dc789-lw45n,Uid:b55bac42-942a-48b6-84f6-be639523c7be,Namespace:calico-system,Attempt:1,}" Aug 13 07:19:19.061834 systemd[1]: run-netns-cni\x2d15d0fe90\x2d745f\x2d7238\x2d21be\x2d364c2398ce82.mount: Deactivated successfully. Aug 13 07:19:19.070991 systemd[1]: Started cri-containerd-1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa.scope - libcontainer container 1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa. Aug 13 07:19:19.087377 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:19.114711 containerd[1456]: time="2025-08-13T07:19:19.114668908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lknln,Uid:c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa\"" Aug 13 07:19:19.282249 systemd-networkd[1393]: cali00b0e44d8aa: Gained IPv6LL Aug 13 07:19:19.398527 systemd-networkd[1393]: calic3eb0706b5d: Link UP Aug 13 07:19:19.398851 systemd-networkd[1393]: calic3eb0706b5d: Gained carrier Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.265 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0 calico-kube-controllers-6bc56dc789- calico-system b55bac42-942a-48b6-84f6-be639523c7be 1172 0 2025-08-13 07:18:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bc56dc789 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bc56dc789-lw45n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic3eb0706b5d [] [] }} ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.265 [INFO][5191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.340 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" HandleID="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" 
Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.340 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" HandleID="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003277d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bc56dc789-lw45n", "timestamp":"2025-08-13 07:19:19.340279447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.340 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.340 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.340 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.347 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.353 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.357 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.359 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.360 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.361 [INFO][5205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.362 [INFO][5205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.383 [INFO][5205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.391 [INFO][5205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.391 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" host="localhost" Aug 13 07:19:19.564099 containerd[1456]: 
2025-08-13 07:19:19.391 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:19.564099 containerd[1456]: 2025-08-13 07:19:19.391 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" HandleID="k8s-pod-network.a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.566611 containerd[1456]: 2025-08-13 07:19:19.395 [INFO][5191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0", GenerateName:"calico-kube-controllers-6bc56dc789-", Namespace:"calico-system", SelfLink:"", UID:"b55bac42-942a-48b6-84f6-be639523c7be", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc56dc789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bc56dc789-lw45n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3eb0706b5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:19.566611 containerd[1456]: 2025-08-13 07:19:19.395 [INFO][5191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.566611 containerd[1456]: 2025-08-13 07:19:19.395 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3eb0706b5d ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.566611 containerd[1456]: 2025-08-13 07:19:19.398 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.566611 
containerd[1456]: 2025-08-13 07:19:19.399 [INFO][5191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0", GenerateName:"calico-kube-controllers-6bc56dc789-", Namespace:"calico-system", SelfLink:"", UID:"b55bac42-942a-48b6-84f6-be639523c7be", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc56dc789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b", Pod:"calico-kube-controllers-6bc56dc789-lw45n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3eb0706b5d", MAC:"a6:b9:31:c9:77:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:19.566611 containerd[1456]: 2025-08-13 07:19:19.560 [INFO][5191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b" Namespace="calico-system" Pod="calico-kube-controllers-6bc56dc789-lw45n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:19.863189 containerd[1456]: time="2025-08-13T07:19:19.862947121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:19.863189 containerd[1456]: time="2025-08-13T07:19:19.863039927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:19.864431 containerd[1456]: time="2025-08-13T07:19:19.863431359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.864772 containerd[1456]: time="2025-08-13T07:19:19.864681937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.882021 containerd[1456]: time="2025-08-13T07:19:19.881958593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:19.885269 containerd[1456]: time="2025-08-13T07:19:19.885226753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:19:19.887095 containerd[1456]: time="2025-08-13T07:19:19.887067411Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:19.890526 containerd[1456]: time="2025-08-13T07:19:19.890467401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:19.891302 containerd[1456]: time="2025-08-13T07:19:19.891104688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.320413426s" Aug 13 07:19:19.891302 containerd[1456]: time="2025-08-13T07:19:19.891153220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:19:19.893699 containerd[1456]: time="2025-08-13T07:19:19.893440473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:19:19.898318 containerd[1456]: time="2025-08-13T07:19:19.898269800Z" level=info msg="CreateContainer within sandbox \"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:19:19.904129 systemd[1]: Started cri-containerd-a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b.scope - libcontainer container a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b. Aug 13 07:19:19.922248 containerd[1456]: time="2025-08-13T07:19:19.922137993Z" level=info msg="CreateContainer within sandbox \"b8d2b5e1e14a35c9c8b7de676696ddc8f362b7ef9b0be9b7481e3213aac8e0e9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b039f3adc7f991b55ff0dc2ffd0e76f07f070ecd8103244cf5bcec71fc07ff2c\"" Aug 13 07:19:19.923108 containerd[1456]: time="2025-08-13T07:19:19.922743810Z" level=info msg="StartContainer for \"b039f3adc7f991b55ff0dc2ffd0e76f07f070ecd8103244cf5bcec71fc07ff2c\"" Aug 13 07:19:19.928946 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:19:19.963036 systemd[1]: Started cri-containerd-b039f3adc7f991b55ff0dc2ffd0e76f07f070ecd8103244cf5bcec71fc07ff2c.scope - libcontainer container b039f3adc7f991b55ff0dc2ffd0e76f07f070ecd8103244cf5bcec71fc07ff2c. Aug 13 07:19:19.969072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313673167.mount: Deactivated successfully. 
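
The mount unit names in these lines (run-netns-cni\x2dc393dff1\x2d...mount, var-lib-containerd-tmpmounts-containerd\x2dmount3313673167.mount) look mangled but are ordinary systemd unit-name escaping: "/" separators become "-", and bytes outside [a-zA-Z0-9:_.], including the literal "-" in the CNI netns name, become \xXX. A simplified sketch of that escaping (systemd-escape additionally special-cases a leading "." and the root path, omitted here):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapePath mimics systemd's path escaping for unit names, minus edge cases.
    func escapePath(p string) string {
    	p = strings.Trim(p, "/")
    	var b strings.Builder
    	for i := 0; i < len(p); i++ {
    		c := p[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-') // path separators become dashes
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
    			b.WriteByte(c) // allowed as-is
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c) // everything else, e.g. '-' -> \x2d
    		}
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(escapePath("/run/netns/cni-c393dff1-19a9-a9c5-47e3-50af6f2246c7") + ".mount")
    	// run-netns-cni\x2dc393dff1\x2d19a9\x2da9c5\x2d47e3\x2d50af6f2246c7.mount
    }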
Aug 13 07:19:19.972070 containerd[1456]: time="2025-08-13T07:19:19.970771796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc56dc789-lw45n,Uid:b55bac42-942a-48b6-84f6-be639523c7be,Namespace:calico-system,Attempt:1,} returns sandbox id \"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b\"" Aug 13 07:19:20.018994 containerd[1456]: time="2025-08-13T07:19:20.018726239Z" level=info msg="StartContainer for \"b039f3adc7f991b55ff0dc2ffd0e76f07f070ecd8103244cf5bcec71fc07ff2c\" returns successfully" Aug 13 07:19:20.640485 kubelet[2512]: I0813 07:19:20.640404 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75b95fc767-dstlv" podStartSLOduration=2.006527691 podStartE2EDuration="15.640378091s" podCreationTimestamp="2025-08-13 07:19:05 +0000 UTC" firstStartedPulling="2025-08-13 07:19:06.259218058 +0000 UTC m=+41.943928537" lastFinishedPulling="2025-08-13 07:19:19.893068458 +0000 UTC m=+55.577778937" observedRunningTime="2025-08-13 07:19:20.640140452 +0000 UTC m=+56.324850931" watchObservedRunningTime="2025-08-13 07:19:20.640378091 +0000 UTC m=+56.325088570" Aug 13 07:19:21.159203 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930). Aug 13 07:19:21.212403 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:21.214570 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:21.218911 systemd-logind[1436]: New session 11 of user core. Aug 13 07:19:21.225964 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:19:21.265987 systemd-networkd[1393]: calic3eb0706b5d: Gained IPv6LL Aug 13 07:19:21.428998 sshd[5308]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:21.433192 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:54930.service: Deactivated successfully. Aug 13 07:19:21.436085 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:19:21.437272 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:19:21.456498 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:54940.service - OpenSSH per-connection server daemon (10.0.0.1:54940). Aug 13 07:19:21.460139 systemd-logind[1436]: Removed session 11. Aug 13 07:19:21.577575 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 54940 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:21.577456 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:21.583018 systemd-logind[1436]: New session 12 of user core. Aug 13 07:19:21.592963 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:19:21.853607 sshd[5327]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:21.863853 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:54940.service: Deactivated successfully. Aug 13 07:19:21.866712 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:19:21.869144 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:19:21.873212 systemd-logind[1436]: Removed session 12. Aug 13 07:19:21.878452 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:54948.service - OpenSSH per-connection server daemon (10.0.0.1:54948). 
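
Each "Accepted publickey" line carries the same SHA256:CMfoLh... value: that is OpenSSH's key fingerprint, the unpadded base64 of a SHA-256 digest over the wire-format public-key blob (the second field of an authorized_keys line, base64-decoded). A stdlib-only sketch, with a placeholder blob since the actual key material never appears in this log:

    package main

    import (
    	"crypto/sha256"
    	"encoding/base64"
    	"fmt"
    )

    // fingerprint renders an OpenSSH-style SHA256 fingerprint of a public-key blob.
    func fingerprint(pubKeyBlob []byte) string {
    	sum := sha256.Sum256(pubKeyBlob)
    	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
    }

    func main() {
    	// Placeholder bytes; feed a real decoded key blob to reproduce the logged value.
    	fmt.Println(fingerprint([]byte("not-a-real-key-blob")))
    }

(golang.org/x/crypto/ssh exposes the same computation as ssh.FingerprintSHA256.)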
Aug 13 07:19:21.925490 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 54948 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:21.927843 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:21.933800 systemd-logind[1436]: New session 13 of user core. Aug 13 07:19:21.941972 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:19:22.123546 sshd[5343]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:22.130722 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:54948.service: Deactivated successfully. Aug 13 07:19:22.133540 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:19:22.135553 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:19:22.136604 systemd-logind[1436]: Removed session 13. Aug 13 07:19:22.210455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438289814.mount: Deactivated successfully. Aug 13 07:19:23.051071 containerd[1456]: time="2025-08-13T07:19:23.051001009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.051956 containerd[1456]: time="2025-08-13T07:19:23.051859194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:19:23.053247 containerd[1456]: time="2025-08-13T07:19:23.053209862Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.065096 containerd[1456]: time="2025-08-13T07:19:23.065006319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:23.066157 containerd[1456]: time="2025-08-13T07:19:23.066114888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.172631504s" Aug 13 07:19:23.066223 containerd[1456]: time="2025-08-13T07:19:23.066154744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:19:23.067491 containerd[1456]: time="2025-08-13T07:19:23.067450877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:19:23.071798 containerd[1456]: time="2025-08-13T07:19:23.071764325Z" level=info msg="CreateContainer within sandbox \"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:19:23.117743 containerd[1456]: time="2025-08-13T07:19:23.117685528Z" level=info msg="CreateContainer within sandbox \"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325\"" Aug 13 07:19:23.118380 containerd[1456]: time="2025-08-13T07:19:23.118328896Z" level=info msg="StartContainer for 
\"3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325\"" Aug 13 07:19:23.189977 systemd[1]: Started cri-containerd-3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325.scope - libcontainer container 3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325. Aug 13 07:19:23.232325 containerd[1456]: time="2025-08-13T07:19:23.232272782Z" level=info msg="StartContainer for \"3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325\" returns successfully" Aug 13 07:19:23.629887 kubelet[2512]: I0813 07:19:23.628939 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lknln" podStartSLOduration=36.677882136 podStartE2EDuration="40.62880287s" podCreationTimestamp="2025-08-13 07:18:43 +0000 UTC" firstStartedPulling="2025-08-13 07:19:19.11634858 +0000 UTC m=+54.801059059" lastFinishedPulling="2025-08-13 07:19:23.067269314 +0000 UTC m=+58.751979793" observedRunningTime="2025-08-13 07:19:23.628788463 +0000 UTC m=+59.313498942" watchObservedRunningTime="2025-08-13 07:19:23.62880287 +0000 UTC m=+59.313513349" Aug 13 07:19:24.483019 containerd[1456]: time="2025-08-13T07:19:24.482892406Z" level=info msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.526 [WARNING][5444] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2740fd78-4ba0-40d0-9638-65458c5f2e1e", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52", Pod:"calico-apiserver-655dd967b8-nrt5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6043a71b0b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.526 [INFO][5444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.526 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" iface="eth0" netns="" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.526 [INFO][5444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.526 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.548 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.548 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.548 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.553 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.553 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.555 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:24.563854 containerd[1456]: 2025-08-13 07:19:24.559 [INFO][5444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.567042 containerd[1456]: time="2025-08-13T07:19:24.563881358Z" level=info msg="TearDown network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" successfully" Aug 13 07:19:24.567042 containerd[1456]: time="2025-08-13T07:19:24.563906595Z" level=info msg="StopPodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" returns successfully" Aug 13 07:19:24.614106 containerd[1456]: time="2025-08-13T07:19:24.614052400Z" level=info msg="RemovePodSandbox for \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" Aug 13 07:19:24.616468 containerd[1456]: time="2025-08-13T07:19:24.616416066Z" level=info msg="Forcibly stopping sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\"" Aug 13 07:19:24.641682 systemd[1]: run-containerd-runc-k8s.io-3a42b7d50ecbfb3e18a06e198a6d74ff3fea635e603fcbe1d36687ed0e9ef325-runc.Xg6O9C.mount: Deactivated successfully. Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.654 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"2740fd78-4ba0-40d0-9638-65458c5f2e1e", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88250d6e63ba59334c295631b5d6fd78b0f363a708e7a442428f351a08448a52", Pod:"calico-apiserver-655dd967b8-nrt5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6043a71b0b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.655 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.655 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" iface="eth0" netns="" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.655 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.655 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.680 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.680 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.680 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.686 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.686 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" HandleID="k8s-pod-network.59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Workload="localhost-k8s-calico--apiserver--655dd967b8--nrt5s-eth0" Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.687 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:24.694020 containerd[1456]: 2025-08-13 07:19:24.690 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1" Aug 13 07:19:24.694020 containerd[1456]: time="2025-08-13T07:19:24.694022554Z" level=info msg="TearDown network for sandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" successfully" Aug 13 07:19:24.856516 containerd[1456]: time="2025-08-13T07:19:24.856354108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:24.856516 containerd[1456]: time="2025-08-13T07:19:24.856463355Z" level=info msg="RemovePodSandbox \"59abdc8048f9d4d2a3b970eff4f866746247c057d407ca162b1afeb68b7948d1\" returns successfully" Aug 13 07:19:24.866672 containerd[1456]: time="2025-08-13T07:19:24.866631347Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.903 [WARNING][5519] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lknln-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa", Pod:"goldmane-768f4c5c69-lknln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00b0e44d8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.904 [INFO][5519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.904 [INFO][5519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" iface="eth0" netns="" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.904 [INFO][5519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.904 [INFO][5519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.929 [INFO][5528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.929 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.929 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.937 [WARNING][5528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.938 [INFO][5528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.939 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:24.946755 containerd[1456]: 2025-08-13 07:19:24.943 [INFO][5519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:24.947498 containerd[1456]: time="2025-08-13T07:19:24.946835587Z" level=info msg="TearDown network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" successfully" Aug 13 07:19:24.947498 containerd[1456]: time="2025-08-13T07:19:24.946871205Z" level=info msg="StopPodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" returns successfully" Aug 13 07:19:24.947561 containerd[1456]: time="2025-08-13T07:19:24.947497372Z" level=info msg="RemovePodSandbox for \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:19:24.947561 containerd[1456]: time="2025-08-13T07:19:24.947536387Z" level=info msg="Forcibly stopping sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\"" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:24.988 [WARNING][5546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lknln-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"c1d1c5ee-dd0d-4857-8db1-ad1baffd1d4b", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d4ec84325042f68877375b3c7f7e48803ed13c3b43ce59a972ade3b6a190aaa", Pod:"goldmane-768f4c5c69-lknln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali00b0e44d8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:24.988 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:24.988 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" iface="eth0" netns="" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:24.988 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:24.988 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.012 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.013 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.013 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.020 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.020 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" HandleID="k8s-pod-network.7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Workload="localhost-k8s-goldmane--768f4c5c69--lknln-eth0" Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.022 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.029418 containerd[1456]: 2025-08-13 07:19:25.026 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024" Aug 13 07:19:25.030189 containerd[1456]: time="2025-08-13T07:19:25.030122060Z" level=info msg="TearDown network for sandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" successfully" Aug 13 07:19:25.035489 containerd[1456]: time="2025-08-13T07:19:25.035440028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:25.035688 containerd[1456]: time="2025-08-13T07:19:25.035521505Z" level=info msg="RemovePodSandbox \"7a795130fe4c16eb364daf07e173b486058f3c1b1269cdaa0aff3613c240c024\" returns successfully" Aug 13 07:19:25.036262 containerd[1456]: time="2025-08-13T07:19:25.036219770Z" level=info msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.222 [WARNING][5572] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fcjzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ec0a1a1-c8b0-4122-ab58-78229dc90d73", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210", Pod:"csi-node-driver-fcjzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73cffbc4c27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.222 [INFO][5572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.222 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" iface="eth0" netns="" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.222 [INFO][5572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.222 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.243 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.243 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.244 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.250 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.250 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.252 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.257404 containerd[1456]: 2025-08-13 07:19:25.254 [INFO][5572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.257906 containerd[1456]: time="2025-08-13T07:19:25.257446184Z" level=info msg="TearDown network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" successfully" Aug 13 07:19:25.257906 containerd[1456]: time="2025-08-13T07:19:25.257473256Z" level=info msg="StopPodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" returns successfully" Aug 13 07:19:25.258047 containerd[1456]: time="2025-08-13T07:19:25.258011623Z" level=info msg="RemovePodSandbox for \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" Aug 13 07:19:25.258096 containerd[1456]: time="2025-08-13T07:19:25.258056991Z" level=info msg="Forcibly stopping sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\"" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.313 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fcjzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ec0a1a1-c8b0-4122-ab58-78229dc90d73", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ed666a9ef790da0577aee80dece5f6d12f11d7d6b7b15ef014e0576e295f210", Pod:"csi-node-driver-fcjzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73cffbc4c27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.313 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.313 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" iface="eth0" netns="" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.313 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.313 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.333 [INFO][5609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.333 [INFO][5609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.333 [INFO][5609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.338 [WARNING][5609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.338 [INFO][5609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" HandleID="k8s-pod-network.aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Workload="localhost-k8s-csi--node--driver--fcjzr-eth0" Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.339 [INFO][5609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.345245 containerd[1456]: 2025-08-13 07:19:25.342 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296" Aug 13 07:19:25.345737 containerd[1456]: time="2025-08-13T07:19:25.345280260Z" level=info msg="TearDown network for sandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" successfully" Aug 13 07:19:25.349784 containerd[1456]: time="2025-08-13T07:19:25.349749572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:25.349847 containerd[1456]: time="2025-08-13T07:19:25.349807093Z" level=info msg="RemovePodSandbox \"aeac3e4bc23a5a2aa2613f4b928deae9a4d8d97887a3d5d18fe3bba45a313296\" returns successfully" Aug 13 07:19:25.350449 containerd[1456]: time="2025-08-13T07:19:25.350416397Z" level=info msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.383 [WARNING][5627] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" WorkloadEndpoint="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.383 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.383 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" iface="eth0" netns="" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.383 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.383 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.403 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.404 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.404 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.411 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.411 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.412 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.418504 containerd[1456]: 2025-08-13 07:19:25.415 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.419632 containerd[1456]: time="2025-08-13T07:19:25.418538359Z" level=info msg="TearDown network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" successfully" Aug 13 07:19:25.419632 containerd[1456]: time="2025-08-13T07:19:25.418567594Z" level=info msg="StopPodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" returns successfully" Aug 13 07:19:25.419632 containerd[1456]: time="2025-08-13T07:19:25.419161289Z" level=info msg="RemovePodSandbox for \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" Aug 13 07:19:25.419632 containerd[1456]: time="2025-08-13T07:19:25.419206546Z" level=info msg="Forcibly stopping sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\"" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.453 [WARNING][5656] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" WorkloadEndpoint="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.453 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.453 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" iface="eth0" netns="" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.453 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.453 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.472 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.472 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.472 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.528 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.528 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" HandleID="k8s-pod-network.eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Workload="localhost-k8s-whisker--745cfdf7c7--mzblt-eth0" Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.530 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.535805 containerd[1456]: 2025-08-13 07:19:25.532 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda" Aug 13 07:19:25.535805 containerd[1456]: time="2025-08-13T07:19:25.535779282Z" level=info msg="TearDown network for sandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" successfully" Aug 13 07:19:25.598907 containerd[1456]: time="2025-08-13T07:19:25.598714087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:25.598907 containerd[1456]: time="2025-08-13T07:19:25.598794453Z" level=info msg="RemovePodSandbox \"eba940ecccf9c2251c452296b3ba62dc026e9fb87778f08de30f4e3090eccfda\" returns successfully" Aug 13 07:19:25.599394 containerd[1456]: time="2025-08-13T07:19:25.599349512Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.643 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0", GenerateName:"calico-kube-controllers-6bc56dc789-", Namespace:"calico-system", SelfLink:"", UID:"b55bac42-942a-48b6-84f6-be639523c7be", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc56dc789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b", Pod:"calico-kube-controllers-6bc56dc789-lw45n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3eb0706b5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.643 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.643 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" iface="eth0" netns="" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.643 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.643 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.666 [INFO][5691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.666 [INFO][5691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.666 [INFO][5691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.672 [WARNING][5691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.672 [INFO][5691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.676 [INFO][5691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.684526 containerd[1456]: 2025-08-13 07:19:25.679 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.684972 containerd[1456]: time="2025-08-13T07:19:25.684576173Z" level=info msg="TearDown network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" successfully" Aug 13 07:19:25.684972 containerd[1456]: time="2025-08-13T07:19:25.684616542Z" level=info msg="StopPodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" returns successfully" Aug 13 07:19:25.685170 containerd[1456]: time="2025-08-13T07:19:25.685133378Z" level=info msg="RemovePodSandbox for \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:19:25.685170 containerd[1456]: time="2025-08-13T07:19:25.685162063Z" level=info msg="Forcibly stopping sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\"" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.738 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0", GenerateName:"calico-kube-controllers-6bc56dc789-", Namespace:"calico-system", SelfLink:"", UID:"b55bac42-942a-48b6-84f6-be639523c7be", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc56dc789", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b", Pod:"calico-kube-controllers-6bc56dc789-lw45n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3eb0706b5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.738 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.738 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" iface="eth0" netns="" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.738 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.738 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.764 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.764 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.764 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.770 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.770 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" HandleID="k8s-pod-network.37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Workload="localhost-k8s-calico--kube--controllers--6bc56dc789--lw45n-eth0" Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.773 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:25.780005 containerd[1456]: 2025-08-13 07:19:25.776 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71" Aug 13 07:19:25.780472 containerd[1456]: time="2025-08-13T07:19:25.780047948Z" level=info msg="TearDown network for sandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" successfully" Aug 13 07:19:26.190832 containerd[1456]: time="2025-08-13T07:19:26.190750121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:26.191012 containerd[1456]: time="2025-08-13T07:19:26.190856707Z" level=info msg="RemovePodSandbox \"37e5d3d259a62f0ad4c3902c9422d3208721f3283758c9395874fb1b85d65e71\" returns successfully" Aug 13 07:19:26.191990 containerd[1456]: time="2025-08-13T07:19:26.191935394Z" level=info msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.228 [WARNING][5738] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c", Pod:"coredns-674b8bbfcf-xx8kw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib177ba84fa6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.228 [INFO][5738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.228 [INFO][5738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" iface="eth0" netns="" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.228 [INFO][5738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.228 [INFO][5738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.250 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.250 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.251 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.256 [WARNING][5746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.256 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.257 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:26.265389 containerd[1456]: 2025-08-13 07:19:26.261 [INFO][5738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.265389 containerd[1456]: time="2025-08-13T07:19:26.265287611Z" level=info msg="TearDown network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" successfully" Aug 13 07:19:26.265389 containerd[1456]: time="2025-08-13T07:19:26.265326707Z" level=info msg="StopPodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" returns successfully" Aug 13 07:19:26.266616 containerd[1456]: time="2025-08-13T07:19:26.266034409Z" level=info msg="RemovePodSandbox for \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" Aug 13 07:19:26.266616 containerd[1456]: time="2025-08-13T07:19:26.266061602Z" level=info msg="Forcibly stopping sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\"" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.300 [WARNING][5763] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7664d1a0-e7f0-48d5-bd0d-61e02b72f59f", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ad477210328e7f275f81a819ed4140e2cee692d42c5c9e683a06e5c3fa6b22c", Pod:"coredns-674b8bbfcf-xx8kw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib177ba84fa6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.300 [INFO][5763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.300 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" iface="eth0" netns="" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.300 [INFO][5763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.300 [INFO][5763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.323 [INFO][5771] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.324 [INFO][5771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.324 [INFO][5771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.331 [WARNING][5771] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.331 [INFO][5771] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" HandleID="k8s-pod-network.e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Workload="localhost-k8s-coredns--674b8bbfcf--xx8kw-eth0" Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.332 [INFO][5771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:26.338357 containerd[1456]: 2025-08-13 07:19:26.335 [INFO][5763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b" Aug 13 07:19:26.340001 containerd[1456]: time="2025-08-13T07:19:26.338924095Z" level=info msg="TearDown network for sandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" successfully" Aug 13 07:19:26.432676 containerd[1456]: time="2025-08-13T07:19:26.432569016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:26.432849 containerd[1456]: time="2025-08-13T07:19:26.432711781Z" level=info msg="RemovePodSandbox \"e3fd2adac7ee071b176470337800d5266260d02e94ceaf28f21d85d7b625357b\" returns successfully" Aug 13 07:19:26.433272 containerd[1456]: time="2025-08-13T07:19:26.433212475Z" level=info msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.531 [WARNING][5789] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62b0e9a2-2b8a-410c-bf54-6c522a15fa93", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996", Pod:"coredns-674b8bbfcf-sx5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21447b7a22c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.531 [INFO][5789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.533 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" iface="eth0" netns="" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.533 [INFO][5789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.533 [INFO][5789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.563 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.563 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.563 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.570 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.570 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.572 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:26.580150 containerd[1456]: 2025-08-13 07:19:26.576 [INFO][5789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.580150 containerd[1456]: time="2025-08-13T07:19:26.580122365Z" level=info msg="TearDown network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" successfully" Aug 13 07:19:26.581054 containerd[1456]: time="2025-08-13T07:19:26.580152913Z" level=info msg="StopPodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" returns successfully" Aug 13 07:19:26.581054 containerd[1456]: time="2025-08-13T07:19:26.580913508Z" level=info msg="RemovePodSandbox for \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" Aug 13 07:19:26.581054 containerd[1456]: time="2025-08-13T07:19:26.580941642Z" level=info msg="Forcibly stopping sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\"" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.623 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62b0e9a2-2b8a-410c-bf54-6c522a15fa93", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32df300d872296fd85363a9b412f800e997bf697d8b51f007ba7148d5f157996", Pod:"coredns-674b8bbfcf-sx5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21447b7a22c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.623 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.623 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" iface="eth0" netns="" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.623 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.623 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.648 [INFO][5826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.648 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.648 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.654 [WARNING][5826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.654 [INFO][5826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" HandleID="k8s-pod-network.888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Workload="localhost-k8s-coredns--674b8bbfcf--sx5l7-eth0" Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.656 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:26.662209 containerd[1456]: 2025-08-13 07:19:26.659 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9" Aug 13 07:19:26.662209 containerd[1456]: time="2025-08-13T07:19:26.662191821Z" level=info msg="TearDown network for sandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" successfully" Aug 13 07:19:26.667717 containerd[1456]: time="2025-08-13T07:19:26.667446531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:19:26.667717 containerd[1456]: time="2025-08-13T07:19:26.667546423Z" level=info msg="RemovePodSandbox \"888410e9b12db9fa5ac95e7885c66c0b0a010e54d86696a92f662f63994290f9\" returns successfully" Aug 13 07:19:26.668149 containerd[1456]: time="2025-08-13T07:19:26.668111341Z" level=info msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.710 [WARNING][5844] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab077f6-800e-450e-ac7f-4fa8a8599eca", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b", Pod:"calico-apiserver-655dd967b8-5xw68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e3ee6909a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.710 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.710 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" iface="eth0" netns="" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.710 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.710 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.736 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.736 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.736 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.744 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.744 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.745 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:26.753748 containerd[1456]: 2025-08-13 07:19:26.749 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:26.754403 containerd[1456]: time="2025-08-13T07:19:26.753802082Z" level=info msg="TearDown network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" successfully" Aug 13 07:19:26.754403 containerd[1456]: time="2025-08-13T07:19:26.753860134Z" level=info msg="StopPodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" returns successfully" Aug 13 07:19:26.754646 containerd[1456]: time="2025-08-13T07:19:26.754583698Z" level=info msg="RemovePodSandbox for \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" Aug 13 07:19:26.754646 containerd[1456]: time="2025-08-13T07:19:26.754625818Z" level=info msg="Forcibly stopping sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\"" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.971 [WARNING][5869] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0", GenerateName:"calico-apiserver-655dd967b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"bab077f6-800e-450e-ac7f-4fa8a8599eca", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655dd967b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95ec9488de732111f9b00b1d22c333b05b7def5ae6a1df2ddb1db48b076e271b", Pod:"calico-apiserver-655dd967b8-5xw68", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e3ee6909a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.971 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.971 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" iface="eth0" netns="" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.971 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.971 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.994 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.994 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:26.994 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:27.001 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:27.001 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" HandleID="k8s-pod-network.31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Workload="localhost-k8s-calico--apiserver--655dd967b8--5xw68-eth0" Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:27.004 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:19:27.010281 containerd[1456]: 2025-08-13 07:19:27.007 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec" Aug 13 07:19:27.010764 containerd[1456]: time="2025-08-13T07:19:27.010333323Z" level=info msg="TearDown network for sandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" successfully" Aug 13 07:19:27.136125 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:54962.service - OpenSSH per-connection server daemon (10.0.0.1:54962). Aug 13 07:19:27.188428 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 54962 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:27.190385 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:27.195254 systemd-logind[1436]: New session 14 of user core. Aug 13 07:19:27.205041 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:19:27.403534 kubelet[2512]: I0813 07:19:27.396411 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:19:27.446763 containerd[1456]: time="2025-08-13T07:19:27.446701680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:19:27.447303 containerd[1456]: time="2025-08-13T07:19:27.446796062Z" level=info msg="RemovePodSandbox \"31027e56ca677ed36b69418984cfa33733c0aaa84e5b4fac70b4a7159916d9ec\" returns successfully" Aug 13 07:19:27.449308 containerd[1456]: time="2025-08-13T07:19:27.448058521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.451644 containerd[1456]: time="2025-08-13T07:19:27.451435382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:19:27.455461 containerd[1456]: time="2025-08-13T07:19:27.455420363Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.461968 containerd[1456]: time="2025-08-13T07:19:27.461923411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:27.462861 containerd[1456]: time="2025-08-13T07:19:27.462574515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.395074715s" Aug 13 07:19:27.462861 containerd[1456]: time="2025-08-13T07:19:27.462608340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:19:27.492603 containerd[1456]: time="2025-08-13T07:19:27.492552334Z" level=info msg="CreateContainer within sandbox \"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:19:27.504727 containerd[1456]: time="2025-08-13T07:19:27.504677475Z" level=info msg="CreateContainer within sandbox \"a7bc528f7ff3ed037cbd33013d2e41deae2c06257517a33c16f2b857691fd28b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a09b507f2c8b0e2ab1b002535074f90a68b76483ed6093294b1065fe8e8d27ec\"" Aug 13 07:19:27.505382 containerd[1456]: time="2025-08-13T07:19:27.505335552Z" level=info msg="StartContainer for \"a09b507f2c8b0e2ab1b002535074f90a68b76483ed6093294b1065fe8e8d27ec\"" Aug 13 07:19:27.568949 systemd[1]: Started cri-containerd-a09b507f2c8b0e2ab1b002535074f90a68b76483ed6093294b1065fe8e8d27ec.scope - libcontainer container a09b507f2c8b0e2ab1b002535074f90a68b76483ed6093294b1065fe8e8d27ec. Aug 13 07:19:27.605344 sshd[5885]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:27.610398 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:54962.service: Deactivated successfully. Aug 13 07:19:27.613610 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:19:27.614704 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:19:27.616094 systemd-logind[1436]: Removed session 14. 
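Interleaved with SSH session 14, the kube-controllers rollout above follows the usual CRI ordering: pull the image (4.395074715s for roughly 51 MB read here), create the container inside the existing sandbox a7bc528f..., then start it, with systemd tracking the result as a cri-containerd-<id>.scope unit; the StartContainer success lands just below. A compressed sketch of that ordering, with placeholder function bodies rather than containerd's CRI implementation:

```go
// Compressed sketch of the pull -> create -> start ordering above; the
// function bodies are placeholders, not containerd's CRI implementation.
package main

import (
	"fmt"
	"time"
)

func pullImage(ref string) (string, error) {
	start := time.Now()
	// ... resolve the reference, fetch and unpack layers ...
	fmt.Printf("pulled %s in %s\n", ref, time.Since(start))
	return "sha256:761b294e2655...", nil // image ID as in the log
}

func createContainer(sandboxID, imageID, name string) (string, error) {
	// ... prepare rootfs and runtime spec inside the existing sandbox ...
	return "a09b507f2c8b...", nil // container ID as in the log
}

func startContainer(containerID string) error {
	// ... hand off to the runtime; systemd then tracks the process tree
	// as a cri-containerd-<id>.scope unit ...
	return nil
}

func main() {
	img, err := pullImage("ghcr.io/flatcar/calico/kube-controllers:v3.30.2")
	if err != nil {
		panic(err)
	}
	ctr, err := createContainer("a7bc528f7ff3...", img, "calico-kube-controllers")
	if err != nil {
		panic(err)
	}
	if err := startContainer(ctr); err != nil {
		panic(err)
	}
	fmt.Println("started", ctr)
}
```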
Aug 13 07:19:27.617620 containerd[1456]: time="2025-08-13T07:19:27.617357979Z" level=info msg="StartContainer for \"a09b507f2c8b0e2ab1b002535074f90a68b76483ed6093294b1065fe8e8d27ec\" returns successfully" Aug 13 07:19:27.661416 kubelet[2512]: I0813 07:19:27.659947 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bc56dc789-lw45n" podStartSLOduration=37.170705762 podStartE2EDuration="44.659924161s" podCreationTimestamp="2025-08-13 07:18:43 +0000 UTC" firstStartedPulling="2025-08-13 07:19:19.974484749 +0000 UTC m=+55.659195228" lastFinishedPulling="2025-08-13 07:19:27.463703148 +0000 UTC m=+63.148413627" observedRunningTime="2025-08-13 07:19:27.659346699 +0000 UTC m=+63.344057198" watchObservedRunningTime="2025-08-13 07:19:27.659924161 +0000 UTC m=+63.344634650" Aug 13 07:19:32.619621 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:35270.service - OpenSSH per-connection server daemon (10.0.0.1:35270). Aug 13 07:19:33.207585 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 35270 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:33.209579 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:33.214269 systemd-logind[1436]: New session 15 of user core. Aug 13 07:19:33.228961 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:19:33.411609 sshd[5986]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:33.418726 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:35270.service: Deactivated successfully. Aug 13 07:19:33.421161 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:19:33.421799 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:19:33.422720 systemd-logind[1436]: Removed session 15. Aug 13 07:19:36.583022 systemd[1]: run-containerd-runc-k8s.io-7ddea1a0b005a35b287e537906edb896e15e0f20f9eea53108fa796ade398e17-runc.Pw3nks.mount: Deactivated successfully. Aug 13 07:19:38.423921 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:60108.service - OpenSSH per-connection server daemon (10.0.0.1:60108). Aug 13 07:19:38.461500 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 60108 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:38.463196 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:38.467400 systemd-logind[1436]: New session 16 of user core. Aug 13 07:19:38.471960 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:19:38.588912 sshd[6024]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:38.594243 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:60108.service: Deactivated successfully. Aug 13 07:19:38.596317 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:19:38.597154 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:19:38.598126 systemd-logind[1436]: Removed session 16. Aug 13 07:19:43.612211 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:60118.service - OpenSSH per-connection server daemon (10.0.0.1:60118). Aug 13 07:19:43.639159 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 60118 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:43.641095 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:43.645683 systemd-logind[1436]: New session 17 of user core. 
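The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (07:19:27.659924161 minus 07:18:43 = 44.659924161s), and podStartSLOduration additionally excludes the image-pull window, lastFinishedPulling minus firstStartedPulling = 7.489218399s, leaving 37.170705762s. A small Go check of that arithmetic using the timestamps exactly as logged:

```go
// Reproduces the podStartSLOduration arithmetic from the kubelet line
// above using the timestamps it reports.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 07:18:43 +0000 UTC")
	firstPull := mustParse("2025-08-13 07:19:19.974484749 +0000 UTC")
	lastPull := mustParse("2025-08-13 07:19:27.463703148 +0000 UTC")
	running := mustParse("2025-08-13 07:19:27.659924161 +0000 UTC")

	e2e := running.Sub(created)          // 44.659924161s, podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // minus the 7.489218399s pull window
	fmt.Println(e2e, slo)                // 44.659924161s 37.170705762s
}
```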
Aug 13 07:19:43.657049 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:19:43.837633 sshd[6041]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:43.850126 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:60118.service: Deactivated successfully. Aug 13 07:19:43.852236 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:19:43.853073 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:19:43.860077 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:60128.service - OpenSSH per-connection server daemon (10.0.0.1:60128). Aug 13 07:19:43.862597 systemd-logind[1436]: Removed session 17. Aug 13 07:19:43.899803 sshd[6056]: Accepted publickey for core from 10.0.0.1 port 60128 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:43.901429 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:43.905875 systemd-logind[1436]: New session 18 of user core. Aug 13 07:19:43.913954 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:19:44.242074 sshd[6056]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:44.251773 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:60128.service: Deactivated successfully. Aug 13 07:19:44.254258 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:19:44.256208 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:19:44.263277 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140). Aug 13 07:19:44.264480 systemd-logind[1436]: Removed session 18. Aug 13 07:19:44.302081 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:44.303861 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:44.310186 systemd-logind[1436]: New session 19 of user core. Aug 13 07:19:44.315107 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:19:44.405278 kubelet[2512]: E0813 07:19:44.405023 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:45.028996 sshd[6069]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:45.042574 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:60140.service: Deactivated successfully. Aug 13 07:19:45.046762 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:19:45.051007 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:19:45.059217 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:60152.service - OpenSSH per-connection server daemon (10.0.0.1:60152). Aug 13 07:19:45.060935 systemd-logind[1436]: Removed session 19. Aug 13 07:19:45.105617 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 60152 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:45.107640 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:45.112219 systemd-logind[1436]: New session 20 of user core. Aug 13 07:19:45.122060 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:19:45.488398 sshd[6089]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:45.499747 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:60152.service: Deactivated successfully. 
Aug 13 07:19:45.502384 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:19:45.504867 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:19:45.514205 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:60154.service - OpenSSH per-connection server daemon (10.0.0.1:60154). Aug 13 07:19:45.515332 systemd-logind[1436]: Removed session 20. Aug 13 07:19:45.547277 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 60154 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:45.549295 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:45.554014 systemd-logind[1436]: New session 21 of user core. Aug 13 07:19:45.563945 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:19:45.688095 sshd[6101]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:45.693433 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:60154.service: Deactivated successfully. Aug 13 07:19:45.696126 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:19:45.697002 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:19:45.698369 systemd-logind[1436]: Removed session 21. Aug 13 07:19:46.400439 kubelet[2512]: E0813 07:19:46.400392 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:50.699444 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:37150.service - OpenSSH per-connection server daemon (10.0.0.1:37150). Aug 13 07:19:50.730320 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 37150 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:50.731966 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:50.735672 systemd-logind[1436]: New session 22 of user core. Aug 13 07:19:50.745122 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:19:50.880744 sshd[6144]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:50.886944 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:37150.service: Deactivated successfully. Aug 13 07:19:50.889553 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:19:50.890387 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:19:50.891527 systemd-logind[1436]: Removed session 22. Aug 13 07:19:53.403864 kubelet[2512]: E0813 07:19:53.403800 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:55.892375 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:37160.service - OpenSSH per-connection server daemon (10.0.0.1:37160). Aug 13 07:19:55.933516 sshd[6203]: Accepted publickey for core from 10.0.0.1 port 37160 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:19:55.935205 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:55.939362 systemd-logind[1436]: New session 23 of user core. Aug 13 07:19:55.948945 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:19:56.066361 sshd[6203]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:56.070841 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:37160.service: Deactivated successfully. 
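Each connection above gets its own templated systemd unit, and the instance name encodes the tuple the daemon serves: sshd@<instance>-<local ip:port>-<remote ip:port>.service (the parenthesized address in each "Started" message is the remote peer). A small parser for that shape; the format is inferred from these log lines, so treat the pattern as an assumption:

```go
// Parses the per-connection sshd unit names seen in the log, e.g.
// "sshd@21-10.0.0.142:22-10.0.0.1:37150.service". The field layout is
// inferred from the log, not taken from sshd documentation.
package main

import (
	"fmt"
	"regexp"
)

var unitRE = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func main() {
	m := unitRE.FindStringSubmatch("sshd@21-10.0.0.142:22-10.0.0.1:37150.service")
	if m == nil {
		panic("no match")
	}
	fmt.Printf("instance=%s local=%s remote=%s\n", m[1], m[2], m[3])
}
```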
Aug 13 07:19:56.072881 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:19:56.073475 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:19:56.074309 systemd-logind[1436]: Removed session 23. Aug 13 07:19:58.399561 kubelet[2512]: E0813 07:19:58.399519 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:20:01.083007 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:55558.service - OpenSSH per-connection server daemon (10.0.0.1:55558). Aug 13 07:20:01.125185 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 55558 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:20:01.126889 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:01.130882 systemd-logind[1436]: New session 24 of user core. Aug 13 07:20:01.140954 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:20:01.266802 sshd[6238]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:01.270779 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:55558.service: Deactivated successfully. Aug 13 07:20:01.272740 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:20:01.273420 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:20:01.274270 systemd-logind[1436]: Removed session 24.
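Finally, the recurring kubelet dns.go:153 warnings above (07:19:44, 07:19:46, 07:19:53, 07:19:58) indicate the node's resolv.conf lists more nameservers than the resolver limit of three that kubelet enforces, so the surplus entries are dropped and only 1.1.1.1, 1.0.0.1, and 8.8.8.8 are applied to pods. A minimal sketch of that clamping, assuming the limit of three visible in the log; this illustrates the behavior, not kubelet's actual code:

```go
// Minimal sketch of the nameserver clamping behind the dns.go:153
// warnings: keep the first maxNS entries and warn about the rest.
package main

import (
	"fmt"
	"strings"
)

const maxNS = 3 // conventional resolver limit that kubelet enforces

func clampNameservers(ns []string) []string {
	if len(ns) <= maxNS {
		return ns
	}
	kept := ns[:maxNS]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(kept, " "))
	return kept
}

func main() {
	// Hypothetical host resolv.conf with four nameservers; only the
	// first three survive, matching the applied line in the log.
	fmt.Println(clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}))
}
```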