Aug 13 07:16:36.918497 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:16:36.918520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:16:36.918531 kernel: BIOS-provided physical RAM map:
Aug 13 07:16:36.918545 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 07:16:36.918551 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 07:16:36.918558 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 07:16:36.918566 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 13 07:16:36.918572 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 13 07:16:36.918578 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:16:36.918587 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 07:16:36.918593 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 07:16:36.918599 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 07:16:36.918609 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 07:16:36.918615 kernel: NX (Execute Disable) protection: active
Aug 13 07:16:36.918623 kernel: APIC: Static calls initialized
Aug 13 07:16:36.918635 kernel: SMBIOS 2.8 present.
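
The BIOS-e820 map above is the firmware's view of physical memory; the two "usable" regions account for roughly 2.45 GiB of RAM. A minimal sketch for tallying it from dmesg output (the regex and the "usable" label match the lines above; everything else is illustrative):

    import re

    E820_LINE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the inclusive [start, end] extents of regions marked 'usable'."""
        total = 0
        for start, end, kind in E820_LINE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1
        return total

    # For the map above this yields 2,633,481,216 bytes (~2.45 GiB), in line
    # with the "Memory: 2434592K/2571752K available" line later in this boot.
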
Aug 13 07:16:36.918642 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 13 07:16:36.918648 kernel: Hypervisor detected: KVM
Aug 13 07:16:36.918655 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:16:36.918662 kernel: kvm-clock: using sched offset of 2703221310 cycles
Aug 13 07:16:36.918669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:16:36.918676 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:16:36.918683 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:16:36.918691 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:16:36.918700 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 13 07:16:36.918707 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 07:16:36.918714 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:16:36.918721 kernel: Using GB pages for direct mapping
Aug 13 07:16:36.918728 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:16:36.918735 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 13 07:16:36.918747 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918762 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918770 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918780 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 13 07:16:36.918793 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918800 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918807 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918816 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:16:36.918823 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 13 07:16:36.918830 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 13 07:16:36.918841 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 13 07:16:36.918851 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 13 07:16:36.918858 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 13 07:16:36.918865 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 13 07:16:36.918885 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
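
"ACPI: Early table checksum verification disabled" means the kernel took the BOCHS tables above on trust at this stage. Every ACPI table is self-checksumming (all of its bytes sum to 0 modulo 256), so the check is easy to redo after boot from the standard sysfs export; a sketch (root required):

    from pathlib import Path

    def acpi_table_ok(raw: bytes) -> bool:
        """An ACPI table is valid when all of its bytes sum to 0 mod 256."""
        return sum(raw) % 256 == 0

    for table in Path("/sys/firmware/acpi/tables").iterdir():
        if table.is_file():
            data = table.read_bytes()
            print(table.name, "ok" if acpi_table_ok(data) else "BAD checksum")
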
Aug 13 07:16:36.918895 kernel: No NUMA configuration found
Aug 13 07:16:36.918902 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 13 07:16:36.918912 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Aug 13 07:16:36.918920 kernel: Zone ranges:
Aug 13 07:16:36.918927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:16:36.918934 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 13 07:16:36.918941 kernel: Normal empty
Aug 13 07:16:36.918948 kernel: Movable zone start for each node
Aug 13 07:16:36.918955 kernel: Early memory node ranges
Aug 13 07:16:36.918963 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 07:16:36.918970 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 13 07:16:36.918977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 13 07:16:36.918987 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:16:36.918997 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 07:16:36.919004 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 13 07:16:36.919011 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:16:36.919018 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:16:36.919025 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:16:36.919032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:16:36.919040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:16:36.919047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:16:36.919056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:16:36.919064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:16:36.919071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:16:36.919078 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:16:36.919085 kernel: TSC deadline timer available
Aug 13 07:16:36.919092 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:16:36.919099 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:16:36.919106 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:16:36.919116 kernel: kvm-guest: setup PV sched yield
Aug 13 07:16:36.919125 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 07:16:36.919133 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:16:36.919140 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:16:36.919148 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:16:36.919155 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:16:36.919162 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:16:36.919169 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:16:36.919176 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:16:36.919183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:16:36.919194 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:16:36.919202 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:16:36.919209 kernel: random: crng init done
Aug 13 07:16:36.919216 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:16:36.919223 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:16:36.919230 kernel: Fallback order for Node 0: 0
Aug 13 07:16:36.919238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Aug 13 07:16:36.919245 kernel: Policy zone: DMA32
Aug 13 07:16:36.919254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:16:36.919262 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Aug 13 07:16:36.919269 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:16:36.919276 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:16:36.919283 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:16:36.919291 kernel: Dynamic Preempt: voluntary
Aug 13 07:16:36.919298 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:16:36.919305 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:16:36.919313 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:16:36.919323 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:16:36.919330 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:16:36.919337 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:16:36.919344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:16:36.919354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:16:36.919361 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:16:36.919368 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:16:36.919376 kernel: Console: colour VGA+ 80x25
Aug 13 07:16:36.919383 kernel: printk: console [ttyS0] enabled
Aug 13 07:16:36.919392 kernel: ACPI: Core revision 20230628
Aug 13 07:16:36.919400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:16:36.919407 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:16:36.919414 kernel: x2apic enabled
Aug 13 07:16:36.919421 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:16:36.919428 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:16:36.919436 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:16:36.919451 kernel: kvm-guest: setup PV IPIs
Aug 13 07:16:36.919478 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:16:36.919485 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:16:36.919493 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
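
The "Calibrating delay loop (skipped)" line above is internally consistent: the preset loops-per-jiffy equals the measured TSC rate in kHz ("tsc: Detected 2794.750 MHz processor" earlier in this log), and the kernel's BogoMIPS definition then reproduces the printed value exactly. A quick check (HZ=1000 is inferred from the numbers, not stated in the log):

    lpj = 2_794_750                  # loops per jiffy, from the log; equals TSC kHz
    HZ = 1_000                       # jiffies per second (inferred: only HZ=1000 fits)
    bogomips = lpj / (500_000 / HZ)  # kernel convention: two loops per "bogo-instruction"
    assert bogomips == 5589.5        # matches "5589.50 BogoMIPS" above
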
Aug 13 07:16:36.919500 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:16:36.919510 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:16:36.919518 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:16:36.919525 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:16:36.919539 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:16:36.919547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:16:36.919557 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:16:36.919565 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:16:36.919575 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:16:36.919582 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:16:36.919590 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:16:36.919598 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:16:36.919605 kernel: x86/bugs: return thunk changed
Aug 13 07:16:36.919613 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:16:36.919623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:16:36.919630 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:16:36.919638 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:16:36.919645 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:16:36.919653 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:16:36.919660 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:16:36.919668 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:16:36.919675 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:16:36.919683 kernel: landlock: Up and running.
Aug 13 07:16:36.919693 kernel: SELinux: Initializing.
Aug 13 07:16:36.919700 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:16:36.919708 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:16:36.919716 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:16:36.919723 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:16:36.919731 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:16:36.919738 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:16:36.919746 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:16:36.919756 kernel: ... version: 0
Aug 13 07:16:36.919766 kernel: ... bit width: 48
Aug 13 07:16:36.919773 kernel: ... generic registers: 6
Aug 13 07:16:36.919780 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:16:36.919788 kernel: ... max period: 00007fffffffffff
Aug 13 07:16:36.919795 kernel: ... fixed-purpose events: 0
Aug 13 07:16:36.919803 kernel: ... event mask: 000000000000003f
Aug 13 07:16:36.919810 kernel: signal: max sigframe size: 1776
Aug 13 07:16:36.919817 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:16:36.919825 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:16:36.919835 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:16:36.919851 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:16:36.919859 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:16:36.919866 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:16:36.919899 kernel: smpboot: Max logical packages: 1
Aug 13 07:16:36.919907 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:16:36.919915 kernel: devtmpfs: initialized
Aug 13 07:16:36.919922 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:16:36.919930 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:16:36.919941 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:16:36.919949 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:16:36.919956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:16:36.919964 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:16:36.919971 kernel: audit: type=2000 audit(1755069396.080:1): state=initialized audit_enabled=0 res=1
Aug 13 07:16:36.919979 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:16:36.919986 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:16:36.919994 kernel: cpuidle: using governor menu
Aug 13 07:16:36.920001 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:16:36.920011 kernel: dca service started, version 1.12.1
Aug 13 07:16:36.920019 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:16:36.920026 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:16:36.920034 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:16:36.920041 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:16:36.920049 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:16:36.920056 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:16:36.920066 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:16:36.920074 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:16:36.920084 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:16:36.920091 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:16:36.920098 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:16:36.920106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:16:36.920113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:16:36.920121 kernel: ACPI: Interpreter enabled
Aug 13 07:16:36.920128 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:16:36.920135 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:16:36.920143 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:16:36.920153 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:16:36.920160 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:16:36.920168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:16:36.920382 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:16:36.920559 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:16:36.920805 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:16:36.920825 kernel: PCI host bridge to bus 0000:00
Aug 13 07:16:36.921065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:16:36.921188 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:16:36.921304 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:16:36.921487 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:16:36.921633 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:16:36.921809 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 13 07:16:36.921947 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:16:36.922169 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:16:36.922363 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:16:36.922508 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 07:16:36.922646 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 07:16:36.922772 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 07:16:36.922943 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:16:36.923144 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:16:36.923331 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 13 07:16:36.923542 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 07:16:36.923695 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 07:16:36.923855 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:16:36.924090 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 07:16:36.924267 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 07:16:36.924401 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 07:16:36.924574 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:16:36.924708 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Aug 13 07:16:36.924857 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 13 07:16:36.925031 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 13 07:16:36.925163 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 07:16:36.925328 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:16:36.925540 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:16:36.925722 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:16:36.925921 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Aug 13 07:16:36.926105 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Aug 13 07:16:36.926286 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:16:36.926453 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
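
The enumeration above is a typical QEMU Q35 layout: 8086:29c0 is the Q35 host bridge, 1234:1111 is QEMU's standard VGA, and the 1af4:* functions are virtio devices (by the legacy virtio ID convention, 1af4:1000 network, 1af4:1001 block, 1af4:1005 entropy/RNG). The same IDs can be read back at runtime from sysfs; a small sketch:

    from pathlib import Path

    # Each PCI function is a directory like 0000:00:04.0 whose 'vendor' and
    # 'device' files contain hex strings such as "0x1af4".
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: [{vendor[2:]}:{device[2:]}]")
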
Aug 13 07:16:36.926465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:16:36.926478 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:16:36.926485 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:16:36.926493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:16:36.926501 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:16:36.926508 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:16:36.926516 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:16:36.926524 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:16:36.926540 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:16:36.926550 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:16:36.926561 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:16:36.926571 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:16:36.926578 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:16:36.926586 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:16:36.926593 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:16:36.926601 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:16:36.926609 kernel: iommu: Default domain type: Translated
Aug 13 07:16:36.926616 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:16:36.926624 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:16:36.926634 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:16:36.926641 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 07:16:36.926649 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 13 07:16:36.926797 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:16:36.927009 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:16:36.927139 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:16:36.927149 kernel: vgaarb: loaded
Aug 13 07:16:36.927157 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:16:36.927181 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:16:36.927197 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:16:36.927206 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:16:36.927214 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:16:36.927224 kernel: pnp: PnP ACPI init
Aug 13 07:16:36.927420 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:16:36.927433 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:16:36.927441 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:16:36.927453 kernel: NET: Registered PF_INET protocol family
Aug 13 07:16:36.927461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:16:36.927468 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:16:36.927476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:16:36.927484 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:16:36.927491 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:16:36.927499 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:16:36.927506 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:16:36.927514 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:16:36.927524 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:16:36.927540 kernel: NET: Registered PF_XDP protocol family
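
The hash-table lines above encode their own sizing: "order" is log2 of the number of 4 KiB pages backing the table, so order 6 is 262144 bytes and order 8 is 1048576 bytes, exactly as printed. A check against the log values:

    PAGE = 4096
    tables = {
        # name: (entries, order, reported_bytes), all taken from the log above
        "TCP established": (32768, 6, 262144),
        "TCP bind":        (32768, 8, 1048576),
        "UDP":             (2048,  4, 65536),
    }
    for name, (entries, order, reported) in tables.items():
        assert (1 << order) * PAGE == reported
        print(f"{name}: {reported // entries} bytes/entry")
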
Aug 13 07:16:36.927699 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:16:36.927840 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:16:36.928010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:16:36.928146 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:16:36.928298 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:16:36.928433 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 13 07:16:36.928450 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:16:36.928458 kernel: Initialise system trusted keyrings
Aug 13 07:16:36.928466 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:16:36.928473 kernel: Key type asymmetric registered
Aug 13 07:16:36.928483 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:16:36.928491 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:16:36.928498 kernel: io scheduler mq-deadline registered
Aug 13 07:16:36.928506 kernel: io scheduler kyber registered
Aug 13 07:16:36.928513 kernel: io scheduler bfq registered
Aug 13 07:16:36.928521 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:16:36.928540 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:16:36.928548 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:16:36.928555 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:16:36.928563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:16:36.928571 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:16:36.928579 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:16:36.928586 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:16:36.928593 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:16:36.928736 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:16:36.928752 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:16:36.929006 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:16:36.929133 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:16:36 UTC (1755069396)
Aug 13 07:16:36.929250 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:16:36.929260 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:16:36.929268 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:16:36.929276 kernel: Segment Routing with IPv6
Aug 13 07:16:36.929289 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:16:36.929296 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:16:36.929304 kernel: Key type dns_resolver registered
Aug 13 07:16:36.929311 kernel: IPI shorthand broadcast: enabled
Aug 13 07:16:36.929319 kernel: sched_clock: Marking stable (759005280, 103710507)->(929451832, -66736045)
Aug 13 07:16:36.929338 kernel: registered taskstats version 1
Aug 13 07:16:36.929354 kernel: Loading compiled-in X.509 certificates
Aug 13 07:16:36.929363 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:16:36.929370 kernel: Key type .fscrypt registered
Aug 13 07:16:36.929381 kernel: Key type fscrypt-provisioning registered
Aug 13 07:16:36.929388 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:16:36.929396 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:16:36.929403 kernel: ima: No architecture policies found
Aug 13 07:16:36.929411 kernel: clk: Disabling unused clocks
Aug 13 07:16:36.929418 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:16:36.929426 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:16:36.929433 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:16:36.929441 kernel: Run /init as init process
Aug 13 07:16:36.929451 kernel: with arguments:
Aug 13 07:16:36.929459 kernel: /init
Aug 13 07:16:36.929466 kernel: with environment:
Aug 13 07:16:36.929474 kernel: HOME=/
Aug 13 07:16:36.929489 kernel: TERM=linux
Aug 13 07:16:36.929497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:16:36.929507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:16:36.929517 systemd[1]: Detected virtualization kvm.
Aug 13 07:16:36.929538 systemd[1]: Detected architecture x86-64.
Aug 13 07:16:36.929546 systemd[1]: Running in initrd.
Aug 13 07:16:36.929554 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:16:36.929561 systemd[1]: Hostname set to <localhost>.
Aug 13 07:16:36.929571 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:16:36.929579 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:16:36.929587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:16:36.929595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:16:36.929607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:16:36.929615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:16:36.929635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:16:36.929646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:16:36.929658 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:16:36.929670 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:16:36.929678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:16:36.929686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:16:36.929695 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:16:36.929703 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:16:36.929711 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:16:36.929719 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:16:36.929728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:16:36.929738 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:16:36.929746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:16:36.929755 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
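
The "Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device ..." names above use systemd's unit-name escaping: "/" becomes "-" and a literal "-" becomes "\x2d" (systemd-escape --unescape --path does the reverse from the shell). A sketch of the reverse mapping:

    def device_unit_to_path(unit: str) -> str:
        """Undo systemd device-unit escaping: '-' -> '/', '\\xNN' -> byte NN."""
        name = unit.removesuffix(".device")
        out, i = [], 0
        while i < len(name):
            if name.startswith("\\x", i):
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    assert device_unit_to_path(
        "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM.device"
    ) == "/dev/disk/by-label/EFI-SYSTEM"
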
Aug 13 07:16:36.929763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:16:36.929772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:16:36.929780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:16:36.929788 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:16:36.929796 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:16:36.929807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:16:36.929818 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:16:36.929826 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:16:36.929835 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:16:36.929843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:16:36.929851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:16:36.929860 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:16:36.929868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:16:36.929901 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:16:36.929944 systemd-journald[193]: Collecting audit messages is disabled.
Aug 13 07:16:36.929993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:16:36.930011 systemd-journald[193]: Journal started
Aug 13 07:16:36.930079 systemd-journald[193]: Runtime Journal (/run/log/journal/6346e8775b1b4e3993a1037568df5125) is 6.0M, max 48.4M, 42.3M free.
Aug 13 07:16:36.931980 systemd-modules-load[194]: Inserted module 'overlay'
Aug 13 07:16:36.956188 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:16:36.956844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:36.959470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:16:36.965903 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:16:36.967692 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 13 07:16:36.968729 kernel: Bridge firewalling registered
Aug 13 07:16:36.973301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:16:36.974929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:16:36.978590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:16:36.979739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:16:36.983692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:16:37.104945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:16:37.111018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:37.114405 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:16:37.116082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:16:37.116987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:16:37.119855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:16:37.138185 dracut-cmdline[226]: dracut-dracut-053
Aug 13 07:16:37.141465 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:16:37.154903 systemd-resolved[229]: Positive Trust Anchors:
Aug 13 07:16:37.154921 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:16:37.154955 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:16:37.157805 systemd-resolved[229]: Defaulting to hostname 'linux'.
Aug 13 07:16:37.159086 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:16:37.164291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:16:37.245926 kernel: SCSI subsystem initialized
Aug 13 07:16:37.254903 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:16:37.265915 kernel: iscsi: registered transport (tcp)
Aug 13 07:16:37.286918 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:16:37.286981 kernel: QLogic iSCSI HBA Driver
Aug 13 07:16:37.347222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:16:37.355085 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:16:37.380071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:16:37.380101 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:16:37.381075 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:16:37.423904 kernel: raid6: avx2x4 gen() 29887 MB/s
Aug 13 07:16:37.440894 kernel: raid6: avx2x2 gen() 29982 MB/s
Aug 13 07:16:37.457959 kernel: raid6: avx2x1 gen() 25453 MB/s
Aug 13 07:16:37.457986 kernel: raid6: using algorithm avx2x2 gen() 29982 MB/s
Aug 13 07:16:37.475941 kernel: raid6: .... xor() 19792 MB/s, rmw enabled
Aug 13 07:16:37.475977 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:16:37.496903 kernel: xor: automatically using best checksumming function avx
Aug 13 07:16:37.651922 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:16:37.670571 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:16:37.683077 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:16:37.696696 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Aug 13 07:16:37.701501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:16:37.709031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:16:37.727562 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Aug 13 07:16:37.765259 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:16:37.783119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:16:37.853667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:16:37.866305 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:16:37.878777 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:16:37.881614 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:16:37.885174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:16:37.887591 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:16:37.892986 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:16:37.895515 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:16:37.897105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:16:37.908943 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:16:37.908978 kernel: GPT:9289727 != 19775487
Aug 13 07:16:37.908989 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:16:37.908999 kernel: GPT:9289727 != 19775487
Aug 13 07:16:37.909009 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:16:37.909022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:16:37.910423 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:16:37.912241 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:16:37.928946 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:16:37.928977 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:16:37.932899 kernel: libata version 3.00 loaded.
Aug 13 07:16:37.935515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:16:37.936811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:37.941243 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:16:38.034885 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:16:38.037017 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:16:38.037031 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Aug 13 07:16:38.034795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:16:38.035017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:38.043573 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (468)
Aug 13 07:16:38.043590 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:16:38.038187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:16:38.046479 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:16:38.049830 kernel: scsi host0: ahci
Aug 13 07:16:38.048761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
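
The GPT complaints a few lines up ("9289727 != 19775487") are the usual sign of a disk image that was grown after it was written: the backup GPT header still sits where the old end of the disk was (LBA 9289727) instead of at the last sector of the now 19775488-sector virtual disk. The expected location is simply total_sectors - 1; tools like sgdisk -e or GNU Parted (which the kernel itself suggests) relocate the backup structures, and the disk-uuid.service messages further down show Flatcar repairing this automatically. A sketch of the check:

    import os

    # Block devices report their size via a seek to the end (root required).
    fd = os.open("/dev/vda", os.O_RDONLY)
    try:
        size_bytes = os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

    sectors = size_bytes // 512
    expected_alt_lba = sectors - 1   # 19775487 for this 10.1 GB disk
    print(f"{sectors} sectors; backup GPT header belongs at LBA {expected_alt_lba}")
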
Aug 13 07:16:38.054041 kernel: scsi host1: ahci
Aug 13 07:16:38.057913 kernel: scsi host2: ahci
Aug 13 07:16:38.061999 kernel: scsi host3: ahci
Aug 13 07:16:38.062232 kernel: scsi host4: ahci
Aug 13 07:16:38.066895 kernel: scsi host5: ahci
Aug 13 07:16:38.067101 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Aug 13 07:16:38.067114 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Aug 13 07:16:38.067124 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Aug 13 07:16:38.067134 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Aug 13 07:16:38.068175 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Aug 13 07:16:38.068196 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Aug 13 07:16:38.073193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:16:38.103913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:38.115051 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:16:38.120070 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:16:38.121282 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:16:38.129032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:16:38.140082 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:16:38.143215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:16:38.165212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:38.291866 disk-uuid[565]: Primary Header is updated.
Aug 13 07:16:38.291866 disk-uuid[565]: Secondary Entries is updated.
Aug 13 07:16:38.291866 disk-uuid[565]: Secondary Header is updated.
Aug 13 07:16:38.295993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:16:38.301914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:16:38.377902 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:38.377970 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:38.379381 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:38.379901 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:38.380920 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:16:38.381901 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:38.381915 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:16:38.383276 kernel: ata3.00: applying bridge limits
Aug 13 07:16:38.383334 kernel: ata3.00: configured for UDMA/100
Aug 13 07:16:38.383912 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:16:38.425916 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:16:38.426178 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:16:38.439901 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:16:39.301901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:16:39.302330 disk-uuid[575]: The operation has completed successfully.
Aug 13 07:16:39.332062 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:16:39.332210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:16:39.358291 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:16:39.364116 sh[590]: Success
Aug 13 07:16:39.377898 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:16:39.414970 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:16:39.424641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:16:39.428228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:16:39.441341 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:16:39.441380 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:39.441391 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:16:39.442320 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:16:39.443023 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:16:39.448470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:16:39.451439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:16:39.468059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:16:39.471270 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:16:39.480410 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:39.480448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:39.480473 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:39.484142 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:39.495187 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:16:39.497388 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:39.508205 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:16:39.515085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:16:39.684467 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:16:39.736322 systemd[1]: Starting systemd-networkd.service - Network Configuration...
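
verity-setup above opens /dev/mapper/usr as a dm-verity device whose root hash is pinned by the verity.usrhash= argument on the kernel command line; a /usr block whose hash chain does not reach that value is rejected at read time. A sketch of recovering the pinned hash at runtime:

    import re
    from pathlib import Path

    cmdline = Path("/proc/cmdline").read_text()
    match = re.search(r"verity\.usrhash=([0-9a-f]{64})", cmdline)
    if match:
        # For this boot: 8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
        print("pinned /usr root hash:", match.group(1))
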
Aug 13 07:16:39.747482 ignition[676]: Ignition 2.19.0
Aug 13 07:16:39.747497 ignition[676]: Stage: fetch-offline
Aug 13 07:16:39.747546 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:39.747560 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:39.747706 ignition[676]: parsed url from cmdline: ""
Aug 13 07:16:39.747711 ignition[676]: no config URL provided
Aug 13 07:16:39.747717 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:16:39.747729 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:16:39.747764 ignition[676]: op(1): [started] loading QEMU firmware config module
Aug 13 07:16:39.747769 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:16:39.756382 ignition[676]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:16:39.775034 systemd-networkd[777]: lo: Link UP
Aug 13 07:16:39.775045 systemd-networkd[777]: lo: Gained carrier
Aug 13 07:16:39.778254 systemd-networkd[777]: Enumeration completed
Aug 13 07:16:39.779148 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:16:39.781682 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:39.781690 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:16:39.782595 systemd-networkd[777]: eth0: Link UP
Aug 13 07:16:39.782599 systemd-networkd[777]: eth0: Gained carrier
Aug 13 07:16:39.782606 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:39.788052 systemd[1]: Reached target network.target - Network.
Aug 13 07:16:39.797977 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:16:39.808416 ignition[676]: parsing config with SHA512: 373fa8a843eb867389d26b248dd41eb56473065d9ca1563988e797d1152a7b297b3a10ee39866e9ec0f698c877e47e52ff947f9d68c3c370d570f1940726d6e2
Aug 13 07:16:39.814967 unknown[676]: fetched base config from "system"
Aug 13 07:16:39.814981 unknown[676]: fetched user config from "qemu"
Aug 13 07:16:39.819087 ignition[676]: fetch-offline: fetch-offline passed
Aug 13 07:16:39.819229 ignition[676]: Ignition finished successfully
Aug 13 07:16:39.821950 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:16:39.824344 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:16:39.837009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:16:39.858365 ignition[783]: Ignition 2.19.0
Aug 13 07:16:39.858376 ignition[783]: Stage: kargs
Aug 13 07:16:39.858595 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:39.858608 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:39.859549 ignition[783]: kargs: kargs passed
Aug 13 07:16:39.859601 ignition[783]: Ignition finished successfully
Aug 13 07:16:39.864097 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:16:39.875180 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
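
The fetch-offline stage above found no config under /usr/lib/ignition, then loaded qemu_fw_cfg and "fetched user config from qemu": on this platform Ignition reads the user's JSON from QEMU's firmware-config device (conventionally the opt/com.coreos/config fw_cfg key). The "adding ssh keys to user core" step in the files stage further down would come from a config along these lines; a hypothetical minimal example (the spec version and the key are placeholders, not taken from the log):

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {
            "users": [{
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],  # placeholder key
            }],
        },
    }
    print(json.dumps(config, indent=2))
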
Aug 13 07:16:39.895054 ignition[790]: Ignition 2.19.0
Aug 13 07:16:39.895078 ignition[790]: Stage: disks
Aug 13 07:16:39.895315 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:39.895332 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:39.896430 ignition[790]: disks: disks passed
Aug 13 07:16:39.896491 ignition[790]: Ignition finished successfully
Aug 13 07:16:39.902109 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:16:39.904168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:16:39.904442 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:16:39.906709 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:16:39.907180 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:16:39.907506 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:16:39.918035 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:16:39.931996 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:16:39.938544 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:16:39.954058 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:16:40.048919 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:16:40.049994 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:16:40.052619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:16:40.066109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:16:40.069579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:16:40.072612 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:16:40.072690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:16:40.082912 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Aug 13 07:16:40.082943 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:40.082959 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:40.082974 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:40.074734 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:16:40.085898 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:40.086193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:16:40.090016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:16:40.103125 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:16:40.141321 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:16:40.147134 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:16:40.153693 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:16:40.158781 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:16:40.435976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:16:40.441957 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:16:40.443964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:16:40.456195 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:16:40.457974 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:40.475700 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:16:40.694983 ignition[925]: INFO : Ignition 2.19.0
Aug 13 07:16:40.694983 ignition[925]: INFO : Stage: mount
Aug 13 07:16:40.697056 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:40.697056 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:40.699610 ignition[925]: INFO : mount: mount passed
Aug 13 07:16:40.700361 ignition[925]: INFO : Ignition finished successfully
Aug 13 07:16:40.703394 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:16:40.716004 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:16:40.724981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:16:40.760911 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (934)
Aug 13 07:16:40.760983 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:40.762566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:40.762603 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:40.765901 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:40.767754 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:16:40.798121 ignition[951]: INFO : Ignition 2.19.0
Aug 13 07:16:40.798121 ignition[951]: INFO : Stage: files
Aug 13 07:16:40.799820 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:40.799820 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:40.802317 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:16:40.803638 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:16:40.803638 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:16:40.808357 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:16:40.809805 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:16:40.811473 unknown[951]: wrote ssh authorized keys file for user: core
Aug 13 07:16:40.812559 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:16:40.814752 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:16:40.816687 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:16:40.816687 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:16:40.816687 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:16:40.852026 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
07:16:40.921695 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:16:40.921695 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:16:40.925632 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:16:40.927251 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:16:40.929161 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:16:40.930783 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:16:40.932616 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:16:40.934252 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:16:40.936185 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:16:40.938361 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:16:40.940390 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:16:40.940390 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:16:40.940390 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:16:40.940390 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:16:40.940390 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 07:16:41.226718 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:16:41.705057 systemd-networkd[777]: eth0: Gained IPv6LL Aug 13 07:16:41.980258 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:16:41.980258 ignition[951]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(e): [started] 
processing unit "prepare-helm.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 13 07:16:41.984190 ignition[951]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 07:16:42.014898 ignition[951]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:16:42.019851 ignition[951]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:16:42.021603 ignition[951]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:16:42.021603 ignition[951]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:16:42.021603 ignition[951]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:16:42.021603 ignition[951]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:16:42.021603 ignition[951]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:16:42.021603 ignition[951]: INFO : files: files passed Aug 13 07:16:42.021603 ignition[951]: INFO : Ignition finished successfully Aug 13 07:16:42.023637 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:16:42.035025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:16:42.036937 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:16:42.039331 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:16:42.039463 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:16:42.050509 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:16:42.053830 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:42.053830 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:42.058287 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:16:42.056573 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:16:42.058866 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Aug 13 07:16:42.069034 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:16:42.097209 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:16:42.097347 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:16:42.098299 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:16:42.100934 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:16:42.101315 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:16:42.106372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:16:42.138182 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:16:42.152019 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:16:42.161644 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:16:42.163883 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:16:42.166113 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:16:42.167834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:16:42.168801 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:16:42.171226 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:16:42.173180 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:16:42.174908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:16:42.177020 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:16:42.179274 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:16:42.181491 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:16:42.183574 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:16:42.186002 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:16:42.188052 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:16:42.190082 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:16:42.191685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:16:42.192680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:16:42.194981 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:16:42.197091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:16:42.199517 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:16:42.200549 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:16:42.203454 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:16:42.204495 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:16:42.206703 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:16:42.207759 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:16:42.210043 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:16:42.211744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:16:42.212830 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:16:42.215442 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:16:42.217188 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:16:42.218991 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:16:42.219838 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:16:42.221711 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:16:42.222604 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:16:42.224568 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:16:42.225700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:16:42.228128 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:16:42.229090 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:16:42.246083 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:16:42.247972 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:16:42.249112 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:16:42.252601 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:16:42.254487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:16:42.255737 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:16:42.258413 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:16:42.259239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:16:42.264848 ignition[1005]: INFO : Ignition 2.19.0
Aug 13 07:16:42.264848 ignition[1005]: INFO : Stage: umount
Aug 13 07:16:42.266370 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:42.266370 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:42.266370 ignition[1005]: INFO : umount: umount passed
Aug 13 07:16:42.266370 ignition[1005]: INFO : Ignition finished successfully
Aug 13 07:16:42.265801 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:16:42.265943 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:16:42.269397 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:16:42.269527 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:16:42.270855 systemd[1]: Stopped target network.target - Network.
Aug 13 07:16:42.273601 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:16:42.273667 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:16:42.274348 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:16:42.274406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:16:42.274672 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:16:42.274727 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:16:42.275304 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:16:42.275357 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:16:42.275769 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:16:42.281861 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:16:42.287942 systemd-networkd[777]: eth0: DHCPv6 lease lost
Aug 13 07:16:42.289135 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:16:42.289273 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:16:42.291595 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:16:42.291787 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:16:42.294600 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:16:42.294660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:16:42.306008 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:16:42.306431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:16:42.306490 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:16:42.306789 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:16:42.306836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:16:42.307260 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:16:42.307309 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:16:42.307625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:16:42.307669 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:16:42.314256 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:16:42.324034 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:16:42.324178 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:16:42.333819 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:16:42.335757 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:16:42.335967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:16:42.338109 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:16:42.338160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:16:42.338436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:16:42.338477 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:16:42.338726 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:16:42.338773 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:16:42.339537 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:16:42.339585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:16:42.346888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:16:42.346943 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:42.356023 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:16:42.356253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:16:42.356309 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:16:42.359526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:16:42.359583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:42.364413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:16:42.364533 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:16:42.694493 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:16:42.694716 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:16:42.695526 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:16:42.697627 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:16:42.697699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:16:42.711164 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:16:42.721760 systemd[1]: Switching root.
Aug 13 07:16:42.755821 systemd-journald[193]: Journal stopped
Aug 13 07:16:44.406910 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:16:44.406988 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:16:44.407031 kernel: SELinux: policy capability open_perms=1
Aug 13 07:16:44.407047 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:16:44.407058 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:16:44.407076 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:16:44.407094 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:16:44.407106 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:16:44.407124 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:16:44.407135 kernel: audit: type=1403 audit(1755069403.599:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:16:44.407148 systemd[1]: Successfully loaded SELinux policy in 51.297ms.
Aug 13 07:16:44.407176 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.294ms.
Aug 13 07:16:44.407194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:16:44.407214 systemd[1]: Detected virtualization kvm.
Aug 13 07:16:44.407226 systemd[1]: Detected architecture x86-64.
Aug 13 07:16:44.407239 systemd[1]: Detected first boot.
Aug 13 07:16:44.407251 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:16:44.407263 zram_generator::config[1071]: No configuration found.
Aug 13 07:16:44.407276 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:16:44.407291 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:16:44.407303 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:16:44.407316 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:16:44.407337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:16:44.407349 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:16:44.407361 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:16:44.407373 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:16:44.407385 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:16:44.407397 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:16:44.407416 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:16:44.407428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:16:44.407440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:16:44.407452 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:16:44.407464 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:16:44.407477 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:16:44.407490 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:16:44.407502 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:16:44.407523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:16:44.407535 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:16:44.407547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:16:44.407560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:16:44.407572 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:16:44.407584 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:16:44.407596 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:16:44.407608 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:16:44.407623 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:16:44.407635 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:16:44.407647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:16:44.407662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:16:44.407674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:16:44.407686 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:16:44.407698 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:16:44.407710 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:16:44.407722 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:16:44.407734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:44.407750 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:16:44.407768 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:16:44.407785 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:16:44.407802 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:16:44.407815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:16:44.407828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:16:44.407844 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:16:44.407859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:16:44.407892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:16:44.407905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:16:44.407921 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:16:44.407944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:16:44.407961 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:16:44.408014 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 07:16:44.408049 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 07:16:44.408072 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:16:44.408102 kernel: fuse: init (API version 7.39)
Aug 13 07:16:44.408137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:16:44.408150 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:16:44.408162 kernel: loop: module loaded
Aug 13 07:16:44.408174 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:16:44.408186 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:16:44.408206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:44.408218 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:16:44.408252 systemd-journald[1159]: Collecting audit messages is disabled.
Aug 13 07:16:44.408288 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:16:44.408300 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:16:44.408313 systemd-journald[1159]: Journal started
Aug 13 07:16:44.408344 systemd-journald[1159]: Runtime Journal (/run/log/journal/6346e8775b1b4e3993a1037568df5125) is 6.0M, max 48.4M, 42.3M free.
Aug 13 07:16:44.409976 kernel: ACPI: bus type drm_connector registered
Aug 13 07:16:44.412901 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:16:44.426952 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:16:44.428511 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:16:44.429906 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:16:44.431402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:16:44.433031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:16:44.434570 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:16:44.434806 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:16:44.436314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:16:44.436549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:16:44.438001 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:16:44.438232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:16:44.439635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:16:44.439861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:16:44.441485 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:16:44.441750 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:16:44.443179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:16:44.443421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:16:44.445130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:16:44.446635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:16:44.448292 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:16:44.465126 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:16:44.471955 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:16:44.474310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:16:44.475437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:16:44.479059 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:16:44.483141 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:16:44.484996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:16:44.493192 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:16:44.494547 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:16:44.497447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:16:44.498559 systemd-journald[1159]: Time spent on flushing to /var/log/journal/6346e8775b1b4e3993a1037568df5125 is 12.960ms for 937 entries.
Aug 13 07:16:44.498559 systemd-journald[1159]: System Journal (/var/log/journal/6346e8775b1b4e3993a1037568df5125) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:16:44.531197 systemd-journald[1159]: Received client request to flush runtime journal.
Aug 13 07:16:44.511078 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:16:44.517394 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:16:44.518795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:16:44.520444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:16:44.525090 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:16:44.534426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:16:44.546676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:16:44.548415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:16:44.559736 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Aug 13 07:16:44.559757 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Aug 13 07:16:44.560067 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:16:44.566781 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:16:44.572867 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:16:44.574680 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 07:16:44.616404 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:16:44.626060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:16:44.647174 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Aug 13 07:16:44.647197 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Aug 13 07:16:44.653233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:16:45.165611 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:16:45.176030 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:16:45.201782 systemd-udevd[1233]: Using default interface naming scheme 'v255'.
Aug 13 07:16:45.217758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:16:45.226080 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:16:45.243371 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:16:45.263676 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 13 07:16:45.352639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1246)
Aug 13 07:16:45.355888 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:16:45.360427 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:16:45.377425 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:16:45.385908 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:16:45.387906 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 07:16:45.405121 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 07:16:45.405365 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 07:16:45.412718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:16:45.449179 systemd-networkd[1237]: lo: Link UP
Aug 13 07:16:45.449600 systemd-networkd[1237]: lo: Gained carrier
Aug 13 07:16:45.451425 systemd-networkd[1237]: Enumeration completed
Aug 13 07:16:45.451928 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:45.451987 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:16:45.452740 systemd-networkd[1237]: eth0: Link UP
Aug 13 07:16:45.452799 systemd-networkd[1237]: eth0: Gained carrier
Aug 13 07:16:45.452847 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:45.460147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:16:45.463191 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:16:45.468130 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:16:45.578428 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:16:45.594985 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:16:45.608155 kernel: kvm_amd: TSC scaling supported
Aug 13 07:16:45.608243 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 07:16:45.608257 kernel: kvm_amd: Nested Paging enabled
Aug 13 07:16:45.608269 kernel: kvm_amd: LBR virtualization supported
Aug 13 07:16:45.609351 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 07:16:45.609377 kernel: kvm_amd: Virtual GIF supported
Aug 13 07:16:45.633953 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:16:45.676921 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:16:45.679305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:45.696238 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:16:45.707093 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:16:45.746046 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:16:45.747764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:16:45.767043 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:16:45.774563 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:16:45.812059 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:16:45.813510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:16:45.814778 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:16:45.814808 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:16:45.815850 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:16:45.818043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:16:45.838136 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:16:45.841381 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:16:45.842654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:16:45.843980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:16:45.846366 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:16:45.852220 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:16:45.854789 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:16:45.865501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:16:45.869914 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 07:16:45.877690 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:16:45.878548 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:16:45.887085 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:16:45.914907 kernel: loop1: detected capacity change from 0 to 140768
Aug 13 07:16:45.954912 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:16:46.074915 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 07:16:46.087914 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 07:16:46.098908 kernel: loop5: detected capacity change from 0 to 142488
Aug 13 07:16:46.108224 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 13 07:16:46.109003 (sd-merge)[1303]: Merged extensions into '/usr'.
Aug 13 07:16:46.113285 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:16:46.113303 systemd[1]: Reloading...
Aug 13 07:16:46.186909 zram_generator::config[1331]: No configuration found.
Aug 13 07:16:46.290376 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:16:46.359255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:16:46.428179 systemd[1]: Reloading finished in 314 ms.
Aug 13 07:16:46.449092 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:16:46.450732 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:16:46.469099 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:16:46.471658 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:16:46.477507 systemd[1]: Reloading requested from client PID 1375 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:16:46.477525 systemd[1]: Reloading...
Aug 13 07:16:46.507930 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:16:46.508337 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:16:46.509362 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:16:46.509678 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Aug 13 07:16:46.509764 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Aug 13 07:16:46.515712 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:16:46.515726 systemd-tmpfiles[1376]: Skipping /boot
Aug 13 07:16:46.530230 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:16:46.530411 systemd-tmpfiles[1376]: Skipping /boot
Aug 13 07:16:46.561908 zram_generator::config[1413]: No configuration found.
Aug 13 07:16:46.678997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:16:46.751666 systemd[1]: Reloading finished in 273 ms.
Aug 13 07:16:46.770432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:16:46.785791 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:16:46.788637 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:16:46.791118 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:16:46.796246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:16:46.800289 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:16:46.807516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.807692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:16:46.811077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:16:46.814657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:16:46.820162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:16:46.821349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:16:46.821453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.826784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:16:46.827157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:16:46.837580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:16:46.837986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:16:46.840276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:16:46.840670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:16:46.850905 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:16:46.855481 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:16:46.856434 augenrules[1482]: No rules
Aug 13 07:16:46.858487 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:16:46.863018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.863549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:16:46.869078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:16:46.871660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:16:46.874711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:16:46.875978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:16:46.880137 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:16:46.881171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.884798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:16:46.885078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:16:46.887940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:16:46.888167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:16:46.894196 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:16:46.895043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:16:46.902109 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:16:46.905285 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:16:46.908469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.908732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:16:46.912353 systemd-resolved[1453]: Positive Trust Anchors:
Aug 13 07:16:46.912370 systemd-resolved[1453]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:16:46.912402 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:16:46.916742 systemd-resolved[1453]: Defaulting to hostname 'linux'.
Aug 13 07:16:46.917186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:16:46.920062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:16:46.922236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:16:46.926053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:16:46.927320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:16:46.927570 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:16:46.927658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:46.928140 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:16:46.930690 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:16:46.931904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:16:46.932129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:16:46.933655 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:16:46.933887 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:16:46.935247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:16:46.947041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:16:46.947054 systemd-networkd[1237]: eth0: Gained IPv6LL
Aug 13 07:16:46.958330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:16:46.960023 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:16:46.960248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:16:46.976656 systemd[1]: Reached target network.target - Network.
Aug 13 07:16:46.977635 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:16:46.979079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:16:46.980602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:16:46.980766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:16:46.994165 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:16:47.087695 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:16:47.088839 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 13 07:16:47.088918 systemd-timesyncd[1523]: Initial clock synchronization to Wed 2025-08-13 07:16:47.416903 UTC.
Aug 13 07:16:47.090095 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:16:47.091538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:16:47.092989 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:16:47.094422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:16:47.095936 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:16:47.096004 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:16:47.097067 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:16:47.098421 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:16:47.099862 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:16:47.101224 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:16:47.103790 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:16:47.108165 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:16:47.112269 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:16:47.115434 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:16:47.116648 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:16:47.117688 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:16:47.118955 systemd[1]: System is tainted: cgroupsv1
Aug 13 07:16:47.119005 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:16:47.119044 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:16:47.120674 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:16:47.123708 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 13 07:16:47.127025 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:16:47.131980 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:16:47.138157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:16:47.139308 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:16:47.140984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:47.144051 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:16:47.146087 jq[1530]: false
Aug 13 07:16:47.153349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:16:47.159357 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:16:47.169938 extend-filesystems[1533]: Found loop3
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found loop4
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found loop5
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found sr0
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda1
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda2
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda3
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found usr
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda4
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda6
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda7
Aug 13 07:16:47.178008 extend-filesystems[1533]: Found vda9
Aug 13 07:16:47.178008 extend-filesystems[1533]: Checking size of /dev/vda9
Aug 13 07:16:47.170832 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:16:47.188271 extend-filesystems[1533]: Resized partition /dev/vda9
Aug 13 07:16:47.186431 dbus-daemon[1529]: [system] SELinux support is enabled
Aug 13 07:16:47.197328 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 07:16:47.197360 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1239)
Aug 13 07:16:47.197176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:16:47.197423 extend-filesystems[1553]: resize2fs 1.47.1 (20-May-2024)
Aug 13 07:16:47.210055 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:16:47.212551 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:16:47.222515 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 07:16:47.225133 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:16:47.227864 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:16:47.230837 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:16:47.253489 update_engine[1557]: I20250813 07:16:47.246624 1557 main.cc:92] Flatcar Update Engine starting
Aug 13 07:16:47.253489 update_engine[1557]: I20250813 07:16:47.248119 1557 update_check_scheduler.cc:74] Next update check in 11m40s
Aug 13 07:16:47.240412 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:16:47.253896 jq[1562]: true
Aug 13 07:16:47.240790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:16:47.244270 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:16:47.244649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:16:47.248257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:16:47.248690 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:16:47.260409 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:16:47.265451 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 07:16:47.265451 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 07:16:47.265451 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 07:16:47.267635 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:16:47.284302 extend-filesystems[1533]: Resized filesystem in /dev/vda9
Aug 13 07:16:47.288413 jq[1572]: true
Aug 13 07:16:47.268011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:16:47.280716 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 13 07:16:47.281120 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 13 07:16:47.304374 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:16:47.312270 tar[1570]: linux-amd64/helm
Aug 13 07:16:47.328124 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:16:47.329537 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:16:47.329863 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:16:47.329905 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:16:47.331415 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:16:47.331434 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:16:47.333692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:16:47.336706 systemd-logind[1556]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 07:16:47.671182 bash[1612]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:16:47.336736 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:16:47.429011 systemd-logind[1556]: New seat seat0.
Aug 13 07:16:47.651268 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:16:47.656313 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:16:47.658001 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:16:47.667977 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:16:47.795176 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:16:47.854707 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:16:47.860505 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:16:47.869024 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:16:47.880008 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:16:47.880393 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:16:47.902578 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:16:47.941470 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:16:47.948320 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:16:47.965252 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:16:47.967629 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:16:48.122622 containerd[1574]: time="2025-08-13T07:16:48.122481209Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:16:48.169680 containerd[1574]: time="2025-08-13T07:16:48.169567631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.172905343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.172965534Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.173002706Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.173221774Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.173255692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.173333279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:48.173379 containerd[1574]: time="2025-08-13T07:16:48.173345618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.173729217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.173763729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.173784005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.173794581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.173945322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174329 containerd[1574]: time="2025-08-13T07:16:48.174254140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174557 containerd[1574]: time="2025-08-13T07:16:48.174466418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:48.174557 containerd[1574]: time="2025-08-13T07:16:48.174484660Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:16:48.174626 containerd[1574]: time="2025-08-13T07:16:48.174590147Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:16:48.174678 containerd[1574]: time="2025-08-13T07:16:48.174656512Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:16:48.180647 containerd[1574]: time="2025-08-13T07:16:48.180588478Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:16:48.180647 containerd[1574]: time="2025-08-13T07:16:48.180658672Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:16:48.180784 containerd[1574]: time="2025-08-13T07:16:48.180682242Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:16:48.180784 containerd[1574]: time="2025-08-13T07:16:48.180702425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:16:48.180784 containerd[1574]: time="2025-08-13T07:16:48.180726642Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:16:48.181199 containerd[1574]: time="2025-08-13T07:16:48.180949977Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:16:48.181714 containerd[1574]: time="2025-08-13T07:16:48.181677530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:16:48.181905 containerd[1574]: time="2025-08-13T07:16:48.181838671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:16:48.182227 containerd[1574]: time="2025-08-13T07:16:48.182192639Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:16:48.182227 containerd[1574]: time="2025-08-13T07:16:48.182226911Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182241607Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182265554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182280500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182294361Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182309120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182322 containerd[1574]: time="2025-08-13T07:16:48.182323773Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182336978Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182351944Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182389836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182405689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182418142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182430658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182442892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182458475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182470865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182487960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.182497 containerd[1574]: time="2025-08-13T07:16:48.182501195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182517204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182532662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182544719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182564066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182586490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182614755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182627355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182638139Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182697005Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182717123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182728451Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182741113Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:16:48.183159 containerd[1574]: time="2025-08-13T07:16:48.182751647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:16:48.183543 containerd[1574]: time="2025-08-13T07:16:48.182771151Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:16:48.183543 containerd[1574]: time="2025-08-13T07:16:48.182787128Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:16:48.183543 containerd[1574]: time="2025-08-13T07:16:48.182799362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:48.183659 containerd[1574]: time="2025-08-13T07:16:48.183209152Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:16:48.183659 containerd[1574]: time="2025-08-13T07:16:48.183298191Z" level=info msg="Connect containerd service" Aug 13 07:16:48.183659 containerd[1574]: time="2025-08-13T07:16:48.183348734Z" level=info msg="using legacy CRI server" Aug 13 07:16:48.183659 containerd[1574]: time="2025-08-13T07:16:48.183359935Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:16:48.183659 containerd[1574]: time="2025-08-13T07:16:48.183486908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:16:48.184545 containerd[1574]: time="2025-08-13T07:16:48.184420033Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 
07:16:48.185176 containerd[1574]: time="2025-08-13T07:16:48.184746529Z" level=info msg="Start subscribing containerd event" Aug 13 07:16:48.185176 containerd[1574]: time="2025-08-13T07:16:48.184947043Z" level=info msg="Start recovering state" Aug 13 07:16:48.185176 containerd[1574]: time="2025-08-13T07:16:48.184987636Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:16:48.185616 containerd[1574]: time="2025-08-13T07:16:48.185273277Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:16:48.186075 containerd[1574]: time="2025-08-13T07:16:48.185786383Z" level=info msg="Start event monitor" Aug 13 07:16:48.186075 containerd[1574]: time="2025-08-13T07:16:48.185862312Z" level=info msg="Start snapshots syncer" Aug 13 07:16:48.186075 containerd[1574]: time="2025-08-13T07:16:48.185928917Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:16:48.186075 containerd[1574]: time="2025-08-13T07:16:48.185946251Z" level=info msg="Start streaming server" Aug 13 07:16:48.187663 containerd[1574]: time="2025-08-13T07:16:48.186395663Z" level=info msg="containerd successfully booted in 0.067680s" Aug 13 07:16:48.186965 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:16:48.306906 tar[1570]: linux-amd64/LICENSE Aug 13 07:16:48.307195 tar[1570]: linux-amd64/README.md Aug 13 07:16:48.322989 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:16:48.746534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:48.748367 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:16:48.749699 systemd[1]: Startup finished in 7.789s (kernel) + 5.190s (userspace) = 12.980s. Aug 13 07:16:48.773468 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:16:49.188952 kubelet[1662]: E0813 07:16:49.188703 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:16:49.192811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:16:49.193196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:16:50.081242 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:16:50.089280 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:44496.service - OpenSSH per-connection server daemon (10.0.0.1:44496). Aug 13 07:16:50.139132 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 44496 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.141286 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.151244 systemd-logind[1556]: New session 1 of user core. Aug 13 07:16:50.152451 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:16:50.162139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:16:50.174979 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:16:50.186217 systemd[1]: Starting user@500.service - User Manager for UID 500... 
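[editor's note] The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a fresh node: the CRI plugin starts its "cni network conf syncer" and keeps re-reading /etc/cni/net.d until a network add-on installs a config. A minimal sketch of that readiness check, with the caveat that the extension matching here is an assumption for illustration, not containerd's exact libcni rule:

```python
import pathlib

# Directory taken from the NetworkPluginConfDir in the CRI config dump above.
CNI_CONF_DIR = pathlib.Path("/etc/cni/net.d")

def cni_configured() -> bool:
    """Roughly: does any CNI config file exist yet?"""
    if not CNI_CONF_DIR.is_dir():
        return False
    return any(p.suffix in {".conf", ".conflist", ".json"}
               for p in CNI_CONF_DIR.iterdir())

print("CNI ready" if cni_configured()
      else "no network config found in /etc/cni/net.d")
```

The kubelet failure that follows is a separate first-boot condition; its restart cadence is examined further down, after the second failure.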
Aug 13 07:16:50.189641 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:16:50.308565 systemd[1681]: Queued start job for default target default.target. Aug 13 07:16:50.309023 systemd[1681]: Created slice app.slice - User Application Slice. Aug 13 07:16:50.309049 systemd[1681]: Reached target paths.target - Paths. Aug 13 07:16:50.309063 systemd[1681]: Reached target timers.target - Timers. Aug 13 07:16:50.317017 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:16:50.324046 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:16:50.324108 systemd[1681]: Reached target sockets.target - Sockets. Aug 13 07:16:50.324122 systemd[1681]: Reached target basic.target - Basic System. Aug 13 07:16:50.324161 systemd[1681]: Reached target default.target - Main User Target. Aug 13 07:16:50.324205 systemd[1681]: Startup finished in 126ms. Aug 13 07:16:50.324944 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:16:50.326728 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:16:50.392243 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:44502.service - OpenSSH per-connection server daemon (10.0.0.1:44502). Aug 13 07:16:50.427194 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 44502 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.428690 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.432939 systemd-logind[1556]: New session 2 of user core. Aug 13 07:16:50.441185 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:16:50.496522 sshd[1693]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:50.510151 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:44508.service - OpenSSH per-connection server daemon (10.0.0.1:44508). Aug 13 07:16:50.510630 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:44502.service: Deactivated successfully. Aug 13 07:16:50.513384 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:16:50.514411 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:16:50.515523 systemd-logind[1556]: Removed session 2. Aug 13 07:16:50.544161 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 44508 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.545668 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.549861 systemd-logind[1556]: New session 3 of user core. Aug 13 07:16:50.565207 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:16:50.616077 sshd[1698]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:50.626128 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:44520.service - OpenSSH per-connection server daemon (10.0.0.1:44520). Aug 13 07:16:50.626619 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:44508.service: Deactivated successfully. Aug 13 07:16:50.629239 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:16:50.630280 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:16:50.631438 systemd-logind[1556]: Removed session 3. 
Aug 13 07:16:50.659473 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 44520 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.661072 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.665229 systemd-logind[1556]: New session 4 of user core. Aug 13 07:16:50.675186 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:16:50.730608 sshd[1706]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:50.739193 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:44530.service - OpenSSH per-connection server daemon (10.0.0.1:44530). Aug 13 07:16:50.739707 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:44520.service: Deactivated successfully. Aug 13 07:16:50.742225 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:16:50.743367 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:16:50.744630 systemd-logind[1556]: Removed session 4. Aug 13 07:16:50.772975 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 44530 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.774610 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.778978 systemd-logind[1556]: New session 5 of user core. Aug 13 07:16:50.789183 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:16:50.850308 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:16:50.850691 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:50.875707 sudo[1721]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:50.878134 sshd[1714]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:50.886171 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534). Aug 13 07:16:50.886801 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:44530.service: Deactivated successfully. Aug 13 07:16:50.890241 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:16:50.891411 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:16:50.892675 systemd-logind[1556]: Removed session 5. Aug 13 07:16:50.921184 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:50.923238 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:50.928167 systemd-logind[1556]: New session 6 of user core. Aug 13 07:16:50.939438 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:16:50.998021 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:16:50.998507 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:51.003165 sudo[1731]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:51.010328 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:16:51.010696 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:51.031118 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:51.033353 auditctl[1734]: No rules Aug 13 07:16:51.034812 systemd[1]: audit-rules.service: Deactivated successfully. 
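[editor's note] Sessions 1 through 7 above and below follow the same pam_unix open/close pattern. If one wanted to pull the session lifecycle out of a journal like this, a toy extractor could look like the following; the regex mirrors the exact message shape shown in these lines and is an illustration, not a parser for every sshd log variant:

```python
import re

# Matches lines such as:
#   sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
PAT = re.compile(
    r"sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): "
    r"session (?P<event>opened|closed) for user (?P<user>\w+)"
)

def sessions(lines):
    for line in lines:
        m = PAT.search(line)
        if m:
            yield m["pid"], m["event"], m["user"]

demo = [
    "Aug 13 07:16:50.428690 sshd[1693]: pam_unix(sshd:session): "
    "session opened for user core(uid=500) by core(uid=0)",
    "Aug 13 07:16:50.496522 sshd[1693]: pam_unix(sshd:session): "
    "session closed for user core",
]
for pid, event, user in sessions(demo):
    print(pid, event, user)   # 1693 opened core / 1693 closed core
```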
Aug 13 07:16:51.035209 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:51.037355 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:51.075025 augenrules[1753]: No rules Aug 13 07:16:51.076140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:51.077606 sudo[1730]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:51.079749 sshd[1723]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:51.088260 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:44536.service - OpenSSH per-connection server daemon (10.0.0.1:44536). Aug 13 07:16:51.088919 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:44534.service: Deactivated successfully. Aug 13 07:16:51.091406 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:16:51.092069 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:16:51.093834 systemd-logind[1556]: Removed session 6. Aug 13 07:16:51.125189 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 44536 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:16:51.127341 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:51.132005 systemd-logind[1556]: New session 7 of user core. Aug 13 07:16:51.148298 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:16:51.206402 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:16:51.206788 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:16:51.512180 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:16:51.512564 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:16:51.813352 dockerd[1784]: time="2025-08-13T07:16:51.813179815Z" level=info msg="Starting up" Aug 13 07:16:52.954288 dockerd[1784]: time="2025-08-13T07:16:52.954102410Z" level=info msg="Loading containers: start." Aug 13 07:16:53.199915 kernel: Initializing XFRM netlink socket Aug 13 07:16:53.291230 systemd-networkd[1237]: docker0: Link UP Aug 13 07:16:53.315591 dockerd[1784]: time="2025-08-13T07:16:53.315524060Z" level=info msg="Loading containers: done." Aug 13 07:16:53.335284 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2042709907-merged.mount: Deactivated successfully. Aug 13 07:16:53.336166 dockerd[1784]: time="2025-08-13T07:16:53.336130708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:16:53.336311 dockerd[1784]: time="2025-08-13T07:16:53.336274546Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:16:53.336440 dockerd[1784]: time="2025-08-13T07:16:53.336407838Z" level=info msg="Daemon has completed initialization" Aug 13 07:16:53.394888 dockerd[1784]: time="2025-08-13T07:16:53.394731503Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:16:53.395149 systemd[1]: Started docker.service - Docker Application Container Engine. 
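[editor's note] dockerd's warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") refers to a kernel build option. A hedged sketch of how one might confirm that symbol on a running system; it assumes CONFIG_IKCONFIG_PROC (i.e. /proc/config.gz), which not every kernel ships, with /boot/config-* as a fallback:

```python
import gzip
import pathlib
import platform

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def kernel_option(name: str) -> str | None:
    """Return 'y'/'m'/value for a kernel config symbol, or None if absent."""
    candidates = [
        pathlib.Path("/proc/config.gz"),                      # needs CONFIG_IKCONFIG_PROC
        pathlib.Path(f"/boot/config-{platform.release()}"),   # common distro location
    ]
    for path in candidates:
        if not path.exists():
            continue
        opener = gzip.open if path.suffix == ".gz" else open
        with opener(path, "rt") as fh:
            for line in fh:
                if line.startswith(f"{name}="):
                    return line.strip().split("=", 1)[1]
    return None

print(OPTION, "=", kernel_option(OPTION))
```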
Aug 13 07:16:54.207964 containerd[1574]: time="2025-08-13T07:16:54.207869966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:16:55.095239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251942248.mount: Deactivated successfully. Aug 13 07:16:57.854807 containerd[1574]: time="2025-08-13T07:16:57.854688484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:57.855586 containerd[1574]: time="2025-08-13T07:16:57.855512440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 07:16:57.856926 containerd[1574]: time="2025-08-13T07:16:57.856886690Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:57.860802 containerd[1574]: time="2025-08-13T07:16:57.860763783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:57.862382 containerd[1574]: time="2025-08-13T07:16:57.862328892Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 3.654355383s" Aug 13 07:16:57.862420 containerd[1574]: time="2025-08-13T07:16:57.862395873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:16:57.863687 containerd[1574]: time="2025-08-13T07:16:57.863626183Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:16:59.443369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:16:59.452022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:59.677758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:16:59.683945 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:16:59.848117 containerd[1574]: time="2025-08-13T07:16:59.847914139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:59.849866 containerd[1574]: time="2025-08-13T07:16:59.849781740Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 07:16:59.851248 containerd[1574]: time="2025-08-13T07:16:59.851199214Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:59.854932 containerd[1574]: time="2025-08-13T07:16:59.854857709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:59.857367 containerd[1574]: time="2025-08-13T07:16:59.857306993Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.993603462s" Aug 13 07:16:59.857484 containerd[1574]: time="2025-08-13T07:16:59.857371486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:16:59.858187 containerd[1574]: time="2025-08-13T07:16:59.858068299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:16:59.922559 kubelet[2004]: E0813 07:16:59.922468 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:16:59.929345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:16:59.929698 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:17:01.658753 containerd[1574]: time="2025-08-13T07:17:01.658662552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.771157 containerd[1574]: time="2025-08-13T07:17:01.771033175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 07:17:01.858675 containerd[1574]: time="2025-08-13T07:17:01.858613067Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.967218 containerd[1574]: time="2025-08-13T07:17:01.966986509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.968199 containerd[1574]: time="2025-08-13T07:17:01.968148271Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 2.109891348s" Aug 13 07:17:01.968266 containerd[1574]: time="2025-08-13T07:17:01.968203945Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:17:01.968898 containerd[1574]: time="2025-08-13T07:17:01.968835555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:17:03.540117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379946830.mount: Deactivated successfully. 
Aug 13 07:17:05.717763 containerd[1574]: time="2025-08-13T07:17:05.717665165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:05.728836 containerd[1574]: time="2025-08-13T07:17:05.728782666Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:17:05.774192 containerd[1574]: time="2025-08-13T07:17:05.774155088Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:05.806813 containerd[1574]: time="2025-08-13T07:17:05.806716653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:05.807765 containerd[1574]: time="2025-08-13T07:17:05.807716650Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 3.838811387s" Aug 13 07:17:05.807818 containerd[1574]: time="2025-08-13T07:17:05.807770236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:17:05.808406 containerd[1574]: time="2025-08-13T07:17:05.808375602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:17:07.043520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056061485.mount: Deactivated successfully. 
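[editor's note] The four kube-* image pulls above each log a size and wall-clock duration, so the effective download rate falls out directly. Figures below are copied from the "Pulled image ... size ... in ..." messages; this is back-of-the-envelope throughput, not a benchmark:

```python
# (bytes, seconds) pairs from the containerd "Pulled image" messages above.
pulls = {
    "kube-apiserver:v1.31.11":          (28_074_559, 3.654355383),
    "kube-controller-manager:v1.31.11": (26_315_079, 1.993603462),
    "kube-scheduler:v1.31.11":          (20_385_552, 2.109891348),
    "kube-proxy:v1.31.11":              (30_382_631, 3.838811387),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 2**20:.2f} MiB/s")
# Works out to roughly 7-13 MiB/s per pull against registry.k8s.io.
```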
Aug 13 07:17:08.105218 containerd[1574]: time="2025-08-13T07:17:08.105128715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.107835 containerd[1574]: time="2025-08-13T07:17:08.107779687Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:17:08.109341 containerd[1574]: time="2025-08-13T07:17:08.109303159Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.113023 containerd[1574]: time="2025-08-13T07:17:08.112966330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.114310 containerd[1574]: time="2025-08-13T07:17:08.114268502Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.30585686s" Aug 13 07:17:08.114385 containerd[1574]: time="2025-08-13T07:17:08.114311393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:17:08.114986 containerd[1574]: time="2025-08-13T07:17:08.114940903Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:17:08.795081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318367020.mount: Deactivated successfully. 
Aug 13 07:17:08.802128 containerd[1574]: time="2025-08-13T07:17:08.802049989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.802802 containerd[1574]: time="2025-08-13T07:17:08.802743530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:17:08.804070 containerd[1574]: time="2025-08-13T07:17:08.804030710Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.806606 containerd[1574]: time="2025-08-13T07:17:08.806564030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:08.807570 containerd[1574]: time="2025-08-13T07:17:08.807533166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 692.561358ms" Aug 13 07:17:08.807570 containerd[1574]: time="2025-08-13T07:17:08.807567447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:17:08.808194 containerd[1574]: time="2025-08-13T07:17:08.808145976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:17:09.418256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500268528.mount: Deactivated successfully. Aug 13 07:17:10.105934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:17:10.117085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:10.423161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:10.428581 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:17:10.711534 kubelet[2121]: E0813 07:17:10.711365 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:17:10.715937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:17:10.716253 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
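[editor's note] The kubelet has now failed twice with the same error (missing /var/lib/kubelet/config.yaml, the file kubeadm writes later), and systemd's restart counter is at 2. The gap between each failure and the next "Scheduled restart job" entry can be read straight off the timestamps; the spacing is consistent with the stock kubelet unit's RestartSec=10, though that value is inferred here, not printed in this log:

```python
from datetime import datetime

# Timestamps copied from the kubelet.service entries above (same day,
# so bare times suffice for computing deltas).
events = [
    ("failed",    "07:16:49.193196"),
    ("restart#1", "07:16:59.443369"),
    ("failed",    "07:16:59.929698"),
    ("restart#2", "07:17:10.105934"),
]

ts = [datetime.strptime(t, "%H:%M:%S.%f") for _, t in events]
for (name_a, _), (name_b, _), a, b in zip(events, events[1:], ts, ts[1:]):
    print(f"{name_a} -> {name_b}: {(b - a).total_seconds():.3f}s")
# failure -> scheduled restart is ~10.2s each time: a clean crash loop
# that resolves once kubeadm creates the config file.
```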
Aug 13 07:17:13.711819 containerd[1574]: time="2025-08-13T07:17:13.711726289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.712781 containerd[1574]: time="2025-08-13T07:17:13.712683534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:17:13.714397 containerd[1574]: time="2025-08-13T07:17:13.714337205Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.717973 containerd[1574]: time="2025-08-13T07:17:13.717924558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.719239 containerd[1574]: time="2025-08-13T07:17:13.719186155Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.91100608s" Aug 13 07:17:13.719239 containerd[1574]: time="2025-08-13T07:17:13.719220660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:17:15.565895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:15.576152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:15.602325 systemd[1]: Reloading requested from client PID 2187 ('systemctl') (unit session-7.scope)... Aug 13 07:17:15.602349 systemd[1]: Reloading... Aug 13 07:17:15.698014 zram_generator::config[2232]: No configuration found. Aug 13 07:17:16.087934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:16.166889 systemd[1]: Reloading finished in 564 ms. Aug 13 07:17:16.210000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:17:16.210112 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:17:16.210518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:16.220212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:16.385911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:16.390998 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:17:16.430029 kubelet[2286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:16.430029 kubelet[2286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
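[editor's note] On this final start the kubelet comes up with its config present and warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated flags that belong in the config file. A hedged sketch of the equivalent KubeletConfiguration stanza: the field names are from the kubelet.config.k8s.io/v1beta1 API, and the two values are the containerd socket and Flexvolume directory visible elsewhere in this journal, so treat them as illustrative rather than this node's actual file:

```python
# Illustrative config-file equivalent of the deprecated flags warned
# about below; values taken from paths that appear in this log.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""
print(KUBELET_CONFIG, end="")
```

The connection-refused errors that follow (dial tcp 10.0.0.149:6443) are likewise expected at this stage: the kubelet is up before the static-pod apiserver it is about to launch.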
Aug 13 07:17:16.430029 kubelet[2286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:16.430504 kubelet[2286]: I0813 07:17:16.430085 2286 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:17:16.725603 kubelet[2286]: I0813 07:17:16.725477 2286 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:17:16.725603 kubelet[2286]: I0813 07:17:16.725508 2286 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:17:16.725778 kubelet[2286]: I0813 07:17:16.725755 2286 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:17:16.748031 kubelet[2286]: E0813 07:17:16.746120 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:16.748031 kubelet[2286]: I0813 07:17:16.747945 2286 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:17:16.754611 kubelet[2286]: E0813 07:17:16.754572 2286 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:17:16.754611 kubelet[2286]: I0813 07:17:16.754605 2286 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:17:16.761203 kubelet[2286]: I0813 07:17:16.761181 2286 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:17:16.761941 kubelet[2286]: I0813 07:17:16.761909 2286 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:17:16.762085 kubelet[2286]: I0813 07:17:16.762053 2286 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:17:16.762239 kubelet[2286]: I0813 07:17:16.762082 2286 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:17:16.762409 kubelet[2286]: I0813 07:17:16.762253 2286 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:17:16.762409 kubelet[2286]: I0813 07:17:16.762264 2286 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:17:16.762409 kubelet[2286]: I0813 07:17:16.762391 2286 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:16.764459 kubelet[2286]: I0813 07:17:16.764423 2286 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:17:16.764459 kubelet[2286]: I0813 07:17:16.764458 2286 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:17:16.764539 kubelet[2286]: I0813 07:17:16.764496 2286 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:17:16.764539 kubelet[2286]: I0813 07:17:16.764516 2286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:17:16.768115 kubelet[2286]: I0813 07:17:16.768067 2286 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:17:16.768714 kubelet[2286]: I0813 07:17:16.768547 2286 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:17:16.769738 kubelet[2286]: W0813 07:17:16.769656 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.149:6443: connect: connection refused Aug 13 07:17:16.769738 kubelet[2286]: E0813 07:17:16.769710 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:16.769738 kubelet[2286]: W0813 07:17:16.769682 2286 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:17:16.769905 kubelet[2286]: W0813 07:17:16.769749 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:16.769905 kubelet[2286]: E0813 07:17:16.769803 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:16.771557 kubelet[2286]: I0813 07:17:16.771521 2286 server.go:1274] "Started kubelet" Aug 13 07:17:16.771678 kubelet[2286]: I0813 07:17:16.771584 2286 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:17:16.771798 kubelet[2286]: I0813 07:17:16.771767 2286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:17:16.774916 kubelet[2286]: I0813 07:17:16.774800 2286 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:17:16.775049 kubelet[2286]: I0813 07:17:16.774967 2286 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:17:16.776302 kubelet[2286]: I0813 07:17:16.776278 2286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:17:16.777710 kubelet[2286]: I0813 07:17:16.777538 2286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:17:16.779656 kubelet[2286]: I0813 07:17:16.778917 2286 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:17:16.779656 kubelet[2286]: I0813 07:17:16.779071 2286 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:17:16.779656 kubelet[2286]: I0813 07:17:16.779122 2286 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:17:16.779656 kubelet[2286]: W0813 07:17:16.779416 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:16.779656 kubelet[2286]: E0813 07:17:16.779454 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:16.779656 kubelet[2286]: E0813 07:17:16.779492 2286 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:16.780110 kubelet[2286]: E0813 07:17:16.778188 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b425a9ee8ee22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:17:16.771495458 +0000 UTC m=+0.376348018,LastTimestamp:2025-08-13 07:17:16.771495458 +0000 UTC m=+0.376348018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:17:16.780563 kubelet[2286]: I0813 07:17:16.780535 2286 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:17:16.780658 kubelet[2286]: I0813 07:17:16.780634 2286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:17:16.781562 kubelet[2286]: E0813 07:17:16.781528 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Aug 13 07:17:16.781928 kubelet[2286]: E0813 07:17:16.781905 2286 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:17:16.782558 kubelet[2286]: I0813 07:17:16.782539 2286 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:17:16.798561 kubelet[2286]: I0813 07:17:16.798362 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:17:16.799776 kubelet[2286]: I0813 07:17:16.799760 2286 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:17:16.799850 kubelet[2286]: I0813 07:17:16.799840 2286 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:17:16.799934 kubelet[2286]: I0813 07:17:16.799923 2286 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:17:16.800066 kubelet[2286]: E0813 07:17:16.800046 2286 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:17:16.800694 kubelet[2286]: W0813 07:17:16.800673 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:16.801145 kubelet[2286]: E0813 07:17:16.801124 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:16.805952 kubelet[2286]: I0813 07:17:16.805928 2286 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:17:16.805952 kubelet[2286]: I0813 07:17:16.805947 2286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:17:16.806037 kubelet[2286]: I0813 07:17:16.805968 2286 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:16.880448 kubelet[2286]: E0813 07:17:16.880363 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:16.900733 kubelet[2286]: E0813 07:17:16.900679 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:17:16.981586 kubelet[2286]: E0813 07:17:16.981427 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:16.983110 kubelet[2286]: E0813 07:17:16.983077 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Aug 13 07:17:17.082650 kubelet[2286]: E0813 07:17:17.082570 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:17.101853 kubelet[2286]: E0813 07:17:17.101789 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:17:17.177911 kubelet[2286]: I0813 07:17:17.177853 2286 policy_none.go:49] "None policy: Start" Aug 13 07:17:17.178738 kubelet[2286]: I0813 07:17:17.178720 2286 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:17:17.178738 kubelet[2286]: I0813 07:17:17.178744 2286 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:17:17.182734 kubelet[2286]: E0813 07:17:17.182699 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:17.185660 kubelet[2286]: I0813 07:17:17.185631 2286 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:17:17.185988 kubelet[2286]: I0813 07:17:17.185956 2286 
eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:17:17.186119 kubelet[2286]: I0813 07:17:17.185988 2286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:17:17.186407 kubelet[2286]: I0813 07:17:17.186344 2286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:17:17.187411 kubelet[2286]: E0813 07:17:17.187371 2286 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:17:17.289588 kubelet[2286]: I0813 07:17:17.289394 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:17:17.289939 kubelet[2286]: E0813 07:17:17.289889 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 13 07:17:17.383828 kubelet[2286]: E0813 07:17:17.383761 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Aug 13 07:17:17.492051 kubelet[2286]: I0813 07:17:17.491998 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:17:17.492561 kubelet[2286]: E0813 07:17:17.492451 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 13 07:17:17.584613 kubelet[2286]: I0813 07:17:17.584441 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:17.584613 kubelet[2286]: I0813 07:17:17.584517 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:17.584613 kubelet[2286]: I0813 07:17:17.584556 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:17.584613 kubelet[2286]: I0813 07:17:17.584582 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:17.584613 kubelet[2286]: I0813 07:17:17.584609 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:17.584894 kubelet[2286]: I0813 07:17:17.584634 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:17:17.584894 kubelet[2286]: I0813 07:17:17.584656 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:17.584894 kubelet[2286]: I0813 07:17:17.584675 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:17.584894 kubelet[2286]: I0813 07:17:17.584696 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:17.630090 kubelet[2286]: W0813 07:17:17.629998 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:17.630090 kubelet[2286]: E0813 07:17:17.630078 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:17.810616 kubelet[2286]: E0813 07:17:17.810575 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:17.810772 kubelet[2286]: E0813 07:17:17.810582 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:17.811275 containerd[1574]: time="2025-08-13T07:17:17.811232505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:17.811837 containerd[1574]: time="2025-08-13T07:17:17.811422723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53d551a9c3e5306664759116740cf33a,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:17.813405 kubelet[2286]: E0813 07:17:17.813387 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:17.813694 containerd[1574]: time="2025-08-13T07:17:17.813656844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:17.894537 kubelet[2286]: I0813 07:17:17.894497 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:17:17.894817 kubelet[2286]: E0813 07:17:17.894793 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 13 07:17:18.074156 kubelet[2286]: W0813 07:17:18.074063 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:18.074156 kubelet[2286]: E0813 07:17:18.074154 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:18.184667 kubelet[2286]: E0813 07:17:18.184515 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Aug 13 07:17:18.302219 kubelet[2286]: W0813 07:17:18.302141 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:18.302219 kubelet[2286]: E0813 07:17:18.302217 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:18.325745 kubelet[2286]: W0813 07:17:18.325702 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Aug 13 07:17:18.325745 kubelet[2286]: E0813 07:17:18.325738 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:18.681023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941139302.mount: Deactivated successfully. 
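The "Failed to ensure lease exists, will retry" messages above show the retry interval doubling while the API server at 10.0.0.149:6443 still refuses connections: interval="200ms", then "400ms", "800ms", "1.6s". A minimal Go sketch of that capped-doubling pattern as it appears in this log (the 7s cap and all names here are illustrative assumptions, not kubelet's actual implementation):

    // backoff.go — illustrative sketch of the doubling retry intervals
    // (200ms -> 400ms -> 800ms -> 1.6s) visible in the lease-controller
    // messages above. The cap is an assumed value, not taken from kubelet.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumption for the sketch
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed, next retry in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }

Run as-is, this prints 200ms, 400ms, 800ms, 1.6s, 3.2s — the first four matching the intervals logged above before the static control-plane pods come up.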
Aug 13 07:17:18.687630 containerd[1574]: time="2025-08-13T07:17:18.687546483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:18.689300 containerd[1574]: time="2025-08-13T07:17:18.689233282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:17:18.690377 containerd[1574]: time="2025-08-13T07:17:18.690332412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:18.691232 containerd[1574]: time="2025-08-13T07:17:18.691195513Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:18.692450 containerd[1574]: time="2025-08-13T07:17:18.692390986Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:18.693138 containerd[1574]: time="2025-08-13T07:17:18.693105738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:17:18.694062 containerd[1574]: time="2025-08-13T07:17:18.694014780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:17:18.695859 containerd[1574]: time="2025-08-13T07:17:18.695806234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:18.696203 kubelet[2286]: I0813 07:17:18.696174 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:17:18.696675 kubelet[2286]: E0813 07:17:18.696478 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Aug 13 07:17:18.697866 containerd[1574]: time="2025-08-13T07:17:18.697804208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 884.082549ms" Aug 13 07:17:18.699130 containerd[1574]: time="2025-08-13T07:17:18.699093706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 887.759615ms" Aug 13 07:17:18.702890 containerd[1574]: time="2025-08-13T07:17:18.702835139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 891.328137ms" Aug 13 07:17:18.849578 containerd[1574]: time="2025-08-13T07:17:18.849307431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:18.849578 containerd[1574]: time="2025-08-13T07:17:18.849373986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:18.849578 containerd[1574]: time="2025-08-13T07:17:18.849389366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.849578 containerd[1574]: time="2025-08-13T07:17:18.849489108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.852417 containerd[1574]: time="2025-08-13T07:17:18.852306670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:18.852417 containerd[1574]: time="2025-08-13T07:17:18.852375550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:18.852417 containerd[1574]: time="2025-08-13T07:17:18.852390549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.852580 containerd[1574]: time="2025-08-13T07:17:18.852501189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.852957 containerd[1574]: time="2025-08-13T07:17:18.852518705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:18.852957 containerd[1574]: time="2025-08-13T07:17:18.852617404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:18.852957 containerd[1574]: time="2025-08-13T07:17:18.852629576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.852957 containerd[1574]: time="2025-08-13T07:17:18.852765622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:18.859713 kubelet[2286]: E0813 07:17:18.859595 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b425a9ee8ee22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:17:16.771495458 +0000 UTC m=+0.376348018,LastTimestamp:2025-08-13 07:17:16.771495458 +0000 UTC m=+0.376348018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:17:18.913018 containerd[1574]: time="2025-08-13T07:17:18.912796646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc95e5a30333ff7471bba835339e8f19478b50335e55374d03ad18ff0b777f48\"" Aug 13 07:17:18.913660 containerd[1574]: time="2025-08-13T07:17:18.913632857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f8fb511b5a2ecd65dcf61a00951d000884761de40cdccab0ba3333fd67c011\"" Aug 13 07:17:18.915515 containerd[1574]: time="2025-08-13T07:17:18.915459663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53d551a9c3e5306664759116740cf33a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5107b07adfc3a2c13d3987eec67093686a630830bb9698cc084fb10cc72a41b\"" Aug 13 07:17:18.915759 kubelet[2286]: E0813 07:17:18.915730 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:18.916194 kubelet[2286]: E0813 07:17:18.916154 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:18.916982 kubelet[2286]: E0813 07:17:18.916949 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:18.918522 containerd[1574]: time="2025-08-13T07:17:18.918474632Z" level=info msg="CreateContainer within sandbox \"cc95e5a30333ff7471bba835339e8f19478b50335e55374d03ad18ff0b777f48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:17:18.919148 containerd[1574]: time="2025-08-13T07:17:18.918657872Z" level=info msg="CreateContainer within sandbox \"87f8fb511b5a2ecd65dcf61a00951d000884761de40cdccab0ba3333fd67c011\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:17:18.919830 containerd[1574]: time="2025-08-13T07:17:18.919772171Z" level=info msg="CreateContainer within sandbox \"b5107b07adfc3a2c13d3987eec67093686a630830bb9698cc084fb10cc72a41b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:17:18.931129 kubelet[2286]: E0813 07:17:18.931041 2286 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:18.940362 containerd[1574]: time="2025-08-13T07:17:18.940324008Z" level=info msg="CreateContainer within sandbox \"cc95e5a30333ff7471bba835339e8f19478b50335e55374d03ad18ff0b777f48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0bf9c4c692a36125ea0b82d33d4626fb30f2aea50a73fef613c9f09807499a10\"" Aug 13 07:17:18.940832 containerd[1574]: time="2025-08-13T07:17:18.940802431Z" level=info msg="StartContainer for \"0bf9c4c692a36125ea0b82d33d4626fb30f2aea50a73fef613c9f09807499a10\"" Aug 13 07:17:18.946703 containerd[1574]: time="2025-08-13T07:17:18.946662345Z" level=info msg="CreateContainer within sandbox \"b5107b07adfc3a2c13d3987eec67093686a630830bb9698cc084fb10cc72a41b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4ce5ef6f993b661a1d6510b9ef70d2851464e11c419347e52fb89ef8155f7be\"" Aug 13 07:17:18.947134 containerd[1574]: time="2025-08-13T07:17:18.947106619Z" level=info msg="StartContainer for \"f4ce5ef6f993b661a1d6510b9ef70d2851464e11c419347e52fb89ef8155f7be\"" Aug 13 07:17:18.948843 containerd[1574]: time="2025-08-13T07:17:18.948793108Z" level=info msg="CreateContainer within sandbox \"87f8fb511b5a2ecd65dcf61a00951d000884761de40cdccab0ba3333fd67c011\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa702ad3dc3969188a3329a6546e9676ca59519f9337602fd558f35d8b22b80f\"" Aug 13 07:17:18.949313 containerd[1574]: time="2025-08-13T07:17:18.949289498Z" level=info msg="StartContainer for \"fa702ad3dc3969188a3329a6546e9676ca59519f9337602fd558f35d8b22b80f\"" Aug 13 07:17:19.017491 containerd[1574]: time="2025-08-13T07:17:19.017260490Z" level=info msg="StartContainer for \"f4ce5ef6f993b661a1d6510b9ef70d2851464e11c419347e52fb89ef8155f7be\" returns successfully" Aug 13 07:17:19.022688 containerd[1574]: time="2025-08-13T07:17:19.022617357Z" level=info msg="StartContainer for \"0bf9c4c692a36125ea0b82d33d4626fb30f2aea50a73fef613c9f09807499a10\" returns successfully" Aug 13 07:17:19.033909 containerd[1574]: time="2025-08-13T07:17:19.033842502Z" level=info msg="StartContainer for \"fa702ad3dc3969188a3329a6546e9676ca59519f9337602fd558f35d8b22b80f\" returns successfully" Aug 13 07:17:19.810312 kubelet[2286]: E0813 07:17:19.810272 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:19.811932 kubelet[2286]: E0813 07:17:19.811902 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:19.813012 kubelet[2286]: E0813 07:17:19.812990 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:20.191218 kubelet[2286]: E0813 07:17:20.191172 2286 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:17:20.300077 kubelet[2286]: I0813 07:17:20.300037 2286 kubelet_node_status.go:72] "Attempting to 
register node" node="localhost" Aug 13 07:17:20.308216 kubelet[2286]: I0813 07:17:20.308168 2286 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:17:20.308216 kubelet[2286]: E0813 07:17:20.308206 2286 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:17:20.314573 kubelet[2286]: E0813 07:17:20.314534 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.415318 kubelet[2286]: E0813 07:17:20.415260 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.516227 kubelet[2286]: E0813 07:17:20.516082 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.617243 kubelet[2286]: E0813 07:17:20.617179 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.717866 kubelet[2286]: E0813 07:17:20.717790 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.814700 kubelet[2286]: E0813 07:17:20.814561 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:20.814700 kubelet[2286]: E0813 07:17:20.814675 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:20.818703 kubelet[2286]: E0813 07:17:20.818673 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:20.919373 kubelet[2286]: E0813 07:17:20.919303 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:21.767661 kubelet[2286]: I0813 07:17:21.767609 2286 apiserver.go:52] "Watching apiserver" Aug 13 07:17:21.779917 kubelet[2286]: I0813 07:17:21.779856 2286 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:17:22.552159 kubelet[2286]: E0813 07:17:22.552088 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:22.815427 kubelet[2286]: E0813 07:17:22.815310 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:23.897964 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-7.scope)... Aug 13 07:17:23.897982 systemd[1]: Reloading... Aug 13 07:17:23.989923 zram_generator::config[2596]: No configuration found. Aug 13 07:17:24.127117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:24.212275 systemd[1]: Reloading finished in 313 ms. Aug 13 07:17:24.245554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:24.274343 systemd[1]: kubelet.service: Deactivated successfully. 
Aug 13 07:17:24.274784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:24.284301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:24.459490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:24.465733 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:17:24.577886 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:24.577886 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:17:24.577886 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:24.578367 kubelet[2650]: I0813 07:17:24.577984 2650 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:17:24.586371 kubelet[2650]: I0813 07:17:24.586349 2650 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:17:24.586371 kubelet[2650]: I0813 07:17:24.586369 2650 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:17:24.586646 kubelet[2650]: I0813 07:17:24.586619 2650 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:17:24.587884 kubelet[2650]: I0813 07:17:24.587855 2650 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:17:24.589723 kubelet[2650]: I0813 07:17:24.589690 2650 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:17:24.592561 kubelet[2650]: E0813 07:17:24.592524 2650 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:17:24.592561 kubelet[2650]: I0813 07:17:24.592556 2650 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:17:24.597762 kubelet[2650]: I0813 07:17:24.597723 2650 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:17:24.598207 kubelet[2650]: I0813 07:17:24.598183 2650 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:17:24.598346 kubelet[2650]: I0813 07:17:24.598311 2650 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:17:24.598482 kubelet[2650]: I0813 07:17:24.598340 2650 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:17:24.598564 kubelet[2650]: I0813 07:17:24.598487 2650 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:17:24.598564 kubelet[2650]: I0813 07:17:24.598496 2650 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:17:24.598564 kubelet[2650]: I0813 07:17:24.598519 2650 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:24.598644 kubelet[2650]: I0813 07:17:24.598626 2650 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:17:24.598644 kubelet[2650]: I0813 07:17:24.598641 2650 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:17:24.598683 kubelet[2650]: I0813 07:17:24.598671 2650 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:17:24.598683 kubelet[2650]: I0813 07:17:24.598681 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:17:24.599419 kubelet[2650]: I0813 07:17:24.599391 2650 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:17:24.601222 kubelet[2650]: I0813 07:17:24.599729 2650 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:17:24.601222 kubelet[2650]: I0813 07:17:24.600219 2650 server.go:1274] "Started kubelet" Aug 13 07:17:24.601222 kubelet[2650]: I0813 07:17:24.601096 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 
07:17:24.602149 kubelet[2650]: I0813 07:17:24.602128 2650 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:17:24.602298 kubelet[2650]: I0813 07:17:24.602273 2650 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:17:24.602889 kubelet[2650]: I0813 07:17:24.602442 2650 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:17:24.607385 kubelet[2650]: I0813 07:17:24.607358 2650 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:17:24.609902 kubelet[2650]: I0813 07:17:24.609800 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:17:24.611731 kubelet[2650]: I0813 07:17:24.611650 2650 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:17:24.612223 kubelet[2650]: E0813 07:17:24.612206 2650 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:17:24.612563 kubelet[2650]: I0813 07:17:24.612452 2650 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:17:24.612759 kubelet[2650]: I0813 07:17:24.612618 2650 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:17:24.613651 kubelet[2650]: I0813 07:17:24.613616 2650 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:17:24.613728 kubelet[2650]: E0813 07:17:24.613661 2650 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:17:24.613782 kubelet[2650]: I0813 07:17:24.613760 2650 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:17:24.621136 kubelet[2650]: I0813 07:17:24.621001 2650 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:17:24.622418 kubelet[2650]: I0813 07:17:24.622398 2650 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:17:24.622738 kubelet[2650]: I0813 07:17:24.622711 2650 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:17:24.622818 kubelet[2650]: I0813 07:17:24.622742 2650 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:17:24.622818 kubelet[2650]: I0813 07:17:24.622765 2650 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:17:24.622818 kubelet[2650]: E0813 07:17:24.622813 2650 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:17:24.675784 kubelet[2650]: I0813 07:17:24.675751 2650 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:17:24.675784 kubelet[2650]: I0813 07:17:24.675770 2650 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:17:24.675784 kubelet[2650]: I0813 07:17:24.675792 2650 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:24.676009 kubelet[2650]: I0813 07:17:24.675979 2650 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:17:24.676009 kubelet[2650]: I0813 07:17:24.675991 2650 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:17:24.676055 kubelet[2650]: I0813 07:17:24.676014 2650 policy_none.go:49] "None policy: Start" Aug 13 07:17:24.676762 kubelet[2650]: I0813 07:17:24.676727 2650 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:17:24.676762 kubelet[2650]: I0813 07:17:24.676762 2650 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:17:24.676965 kubelet[2650]: I0813 07:17:24.676943 2650 state_mem.go:75] "Updated machine memory state" Aug 13 07:17:24.678840 kubelet[2650]: I0813 07:17:24.678528 2650 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:17:24.678840 kubelet[2650]: I0813 07:17:24.678724 2650 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:17:24.678840 kubelet[2650]: I0813 07:17:24.678738 2650 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:17:24.678942 kubelet[2650]: I0813 07:17:24.678927 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:17:24.730057 kubelet[2650]: E0813 07:17:24.729890 2650 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:24.784449 kubelet[2650]: I0813 07:17:24.784399 2650 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:17:24.790279 kubelet[2650]: I0813 07:17:24.790242 2650 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:17:24.790347 kubelet[2650]: I0813 07:17:24.790329 2650 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:17:24.914557 kubelet[2650]: I0813 07:17:24.914459 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:24.914739 kubelet[2650]: I0813 07:17:24.914619 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:17:24.914739 kubelet[2650]: I0813 07:17:24.914679 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:24.914739 kubelet[2650]: I0813 07:17:24.914706 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:24.914739 kubelet[2650]: I0813 07:17:24.914734 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:24.914932 kubelet[2650]: I0813 07:17:24.914750 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d551a9c3e5306664759116740cf33a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53d551a9c3e5306664759116740cf33a\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:24.914932 kubelet[2650]: I0813 07:17:24.914833 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:24.914932 kubelet[2650]: I0813 07:17:24.914849 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:24.914932 kubelet[2650]: I0813 07:17:24.914897 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:25.031420 kubelet[2650]: E0813 07:17:25.031263 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.031420 kubelet[2650]: E0813 07:17:25.031265 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.031420 kubelet[2650]: E0813 07:17:25.031277 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.599130 kubelet[2650]: I0813 07:17:25.599075 2650 apiserver.go:52] "Watching apiserver" Aug 13 07:17:25.613810 kubelet[2650]: I0813 07:17:25.613631 2650 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.851140 2650 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.851376 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.853904 kubelet[2650]: I0813 07:17:25.851476 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.851463712 podStartE2EDuration="3.851463712s" podCreationTimestamp="2025-08-13 07:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:25.851237337 +0000 UTC m=+1.376306466" watchObservedRunningTime="2025-08-13 07:17:25.851463712 +0000 UTC m=+1.376532831" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.851642 2650 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.851754 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.851897 2650 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:17:25.853904 kubelet[2650]: E0813 07:17:25.852005 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:26.001911 kubelet[2650]: I0813 07:17:26.001833 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.00181867 podStartE2EDuration="2.00181867s" podCreationTimestamp="2025-08-13 07:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:26.001380733 +0000 UTC m=+1.526449852" watchObservedRunningTime="2025-08-13 07:17:26.00181867 +0000 UTC m=+1.526887789" Aug 13 07:17:26.476799 kubelet[2650]: I0813 07:17:26.476706 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.47668518 podStartE2EDuration="2.47668518s" podCreationTimestamp="2025-08-13 07:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:26.230592097 +0000 UTC m=+1.755661216" watchObservedRunningTime="2025-08-13 07:17:26.47668518 +0000 UTC m=+2.001754299" Aug 13 07:17:26.635273 kubelet[2650]: E0813 07:17:26.635238 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:26.635811 kubelet[2650]: E0813 07:17:26.635412 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:26.635811 kubelet[2650]: E0813 07:17:26.635434 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:27.637853 kubelet[2650]: E0813 07:17:27.637807 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:27.638455 kubelet[2650]: E0813 07:17:27.637904 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:28.593514 kubelet[2650]: I0813 07:17:28.593469 2650 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:17:28.593985 containerd[1574]: time="2025-08-13T07:17:28.593933771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:17:28.594381 kubelet[2650]: I0813 07:17:28.594142 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:17:29.178279 kubelet[2650]: E0813 07:17:29.178243 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:29.640865 kubelet[2650]: E0813 07:17:29.640822 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:29.942988 kubelet[2650]: I0813 07:17:29.942824 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-kube-proxy\") pod \"kube-proxy-d2bjf\" (UID: \"bc8540ea-6a43-428b-8be5-8e8328fcd0fc\") " pod="kube-system/kube-proxy-d2bjf" Aug 13 07:17:29.942988 kubelet[2650]: I0813 07:17:29.942868 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzjn4\" (UniqueName: \"kubernetes.io/projected/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-kube-api-access-wzjn4\") pod \"kube-proxy-d2bjf\" (UID: \"bc8540ea-6a43-428b-8be5-8e8328fcd0fc\") " pod="kube-system/kube-proxy-d2bjf" Aug 13 07:17:29.942988 kubelet[2650]: I0813 07:17:29.942909 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-xtables-lock\") pod \"kube-proxy-d2bjf\" (UID: \"bc8540ea-6a43-428b-8be5-8e8328fcd0fc\") " pod="kube-system/kube-proxy-d2bjf" Aug 13 07:17:29.942988 kubelet[2650]: I0813 07:17:29.942941 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-lib-modules\") pod \"kube-proxy-d2bjf\" (UID: \"bc8540ea-6a43-428b-8be5-8e8328fcd0fc\") " pod="kube-system/kube-proxy-d2bjf" Aug 13 07:17:30.165472 kubelet[2650]: E0813 
07:17:30.165422 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:30.345091 kubelet[2650]: I0813 07:17:30.344961 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwjh\" (UniqueName: \"kubernetes.io/projected/e286833a-6fe5-4d60-a07c-bde19603a031-kube-api-access-rmwjh\") pod \"tigera-operator-5bf8dfcb4-vccbj\" (UID: \"e286833a-6fe5-4d60-a07c-bde19603a031\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vccbj" Aug 13 07:17:30.345091 kubelet[2650]: I0813 07:17:30.344991 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e286833a-6fe5-4d60-a07c-bde19603a031-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-vccbj\" (UID: \"e286833a-6fe5-4d60-a07c-bde19603a031\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vccbj" Aug 13 07:17:30.355582 kubelet[2650]: E0813 07:17:30.355551 2650 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:17:30.355582 kubelet[2650]: E0813 07:17:30.355580 2650 projected.go:194] Error preparing data for projected volume kube-api-access-wzjn4 for pod kube-system/kube-proxy-d2bjf: configmap "kube-root-ca.crt" not found Aug 13 07:17:30.355685 kubelet[2650]: E0813 07:17:30.355637 2650 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-kube-api-access-wzjn4 podName:bc8540ea-6a43-428b-8be5-8e8328fcd0fc nodeName:}" failed. No retries permitted until 2025-08-13 07:17:30.855616843 +0000 UTC m=+6.380685962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wzjn4" (UniqueName: "kubernetes.io/projected/bc8540ea-6a43-428b-8be5-8e8328fcd0fc-kube-api-access-wzjn4") pod "kube-proxy-d2bjf" (UID: "bc8540ea-6a43-428b-8be5-8e8328fcd0fc") : configmap "kube-root-ca.crt" not found Aug 13 07:17:30.618617 kubelet[2650]: E0813 07:17:30.618570 2650 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 07:17:30.618617 kubelet[2650]: E0813 07:17:30.618607 2650 projected.go:194] Error preparing data for projected volume kube-api-access-rmwjh for pod tigera-operator/tigera-operator-5bf8dfcb4-vccbj: configmap "kube-root-ca.crt" not found Aug 13 07:17:30.618924 kubelet[2650]: E0813 07:17:30.618660 2650 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e286833a-6fe5-4d60-a07c-bde19603a031-kube-api-access-rmwjh podName:e286833a-6fe5-4d60-a07c-bde19603a031 nodeName:}" failed. No retries permitted until 2025-08-13 07:17:31.118643021 +0000 UTC m=+6.643712140 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rmwjh" (UniqueName: "kubernetes.io/projected/e286833a-6fe5-4d60-a07c-bde19603a031-kube-api-access-rmwjh") pod "tigera-operator-5bf8dfcb4-vccbj" (UID: "e286833a-6fe5-4d60-a07c-bde19603a031") : configmap "kube-root-ca.crt" not found Aug 13 07:17:30.642217 kubelet[2650]: E0813 07:17:30.642187 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:30.642217 kubelet[2650]: E0813 07:17:30.642196 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:31.047251 kubelet[2650]: E0813 07:17:31.047100 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:31.047964 containerd[1574]: time="2025-08-13T07:17:31.047919685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2bjf,Uid:bc8540ea-6a43-428b-8be5-8e8328fcd0fc,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:31.077496 containerd[1574]: time="2025-08-13T07:17:31.077403066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:31.077496 containerd[1574]: time="2025-08-13T07:17:31.077464171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:31.077496 containerd[1574]: time="2025-08-13T07:17:31.077475557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.077730 containerd[1574]: time="2025-08-13T07:17:31.077598316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.125344 containerd[1574]: time="2025-08-13T07:17:31.125291592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2bjf,Uid:bc8540ea-6a43-428b-8be5-8e8328fcd0fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"faecdc53aba03fb27a7451ee25b79db82f547e6655ffc726a9e0baa365d8c19e\"" Aug 13 07:17:31.126284 kubelet[2650]: E0813 07:17:31.126254 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:31.128460 containerd[1574]: time="2025-08-13T07:17:31.128379604Z" level=info msg="CreateContainer within sandbox \"faecdc53aba03fb27a7451ee25b79db82f547e6655ffc726a9e0baa365d8c19e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:17:31.145274 containerd[1574]: time="2025-08-13T07:17:31.145236137Z" level=info msg="CreateContainer within sandbox \"faecdc53aba03fb27a7451ee25b79db82f547e6655ffc726a9e0baa365d8c19e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d420e7a7d66da019b0b1cbbe847beecc6bc69ab1482bc06a5bddc9dd26d0eff2\"" Aug 13 07:17:31.145943 containerd[1574]: time="2025-08-13T07:17:31.145864038Z" level=info msg="StartContainer for \"d420e7a7d66da019b0b1cbbe847beecc6bc69ab1482bc06a5bddc9dd26d0eff2\"" Aug 13 07:17:31.197462 containerd[1574]: time="2025-08-13T07:17:31.197369604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vccbj,Uid:e286833a-6fe5-4d60-a07c-bde19603a031,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:17:31.208591 containerd[1574]: time="2025-08-13T07:17:31.208545610Z" level=info msg="StartContainer for \"d420e7a7d66da019b0b1cbbe847beecc6bc69ab1482bc06a5bddc9dd26d0eff2\" returns successfully" Aug 13 07:17:31.225814 containerd[1574]: time="2025-08-13T07:17:31.225651801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:31.225814 containerd[1574]: time="2025-08-13T07:17:31.225733273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:31.225814 containerd[1574]: time="2025-08-13T07:17:31.225760919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.226263 containerd[1574]: time="2025-08-13T07:17:31.226134299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:31.285500 containerd[1574]: time="2025-08-13T07:17:31.285446715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vccbj,Uid:e286833a-6fe5-4d60-a07c-bde19603a031,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af67e3c310da9275d9374e235e3c2177d67692b08edb3b8158c67dd5bd2420c2\"" Aug 13 07:17:31.289332 containerd[1574]: time="2025-08-13T07:17:31.289123858Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:17:31.645388 kubelet[2650]: E0813 07:17:31.645269 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:32.779608 update_engine[1557]: I20250813 07:17:32.779433 1557 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:17:32.808363 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2954) Aug 13 07:17:32.848905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2954) Aug 13 07:17:33.137414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946154750.mount: Deactivated successfully. Aug 13 07:17:33.756300 containerd[1574]: time="2025-08-13T07:17:33.756229200Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:33.759841 containerd[1574]: time="2025-08-13T07:17:33.759761185Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:17:33.761429 containerd[1574]: time="2025-08-13T07:17:33.761395172Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:33.764589 containerd[1574]: time="2025-08-13T07:17:33.764544561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:33.765252 containerd[1574]: time="2025-08-13T07:17:33.765197942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.476023656s" Aug 13 07:17:33.765252 containerd[1574]: time="2025-08-13T07:17:33.765245272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:17:33.767564 containerd[1574]: time="2025-08-13T07:17:33.767538222Z" level=info msg="CreateContainer within sandbox \"af67e3c310da9275d9374e235e3c2177d67692b08edb3b8158c67dd5bd2420c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:17:33.883652 containerd[1574]: time="2025-08-13T07:17:33.883580673Z" level=info msg="CreateContainer within sandbox \"af67e3c310da9275d9374e235e3c2177d67692b08edb3b8158c67dd5bd2420c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1712e22f9527313eb0995d1cd3fc622d80c0bfb5b29a5036a2bfc0dc8f77041e\"" Aug 13 07:17:33.884268 containerd[1574]: time="2025-08-13T07:17:33.884218226Z" level=info msg="StartContainer for \"1712e22f9527313eb0995d1cd3fc622d80c0bfb5b29a5036a2bfc0dc8f77041e\"" Aug 13 07:17:34.005320 containerd[1574]: time="2025-08-13T07:17:34.005230735Z" level=info msg="StartContainer for \"1712e22f9527313eb0995d1cd3fc622d80c0bfb5b29a5036a2bfc0dc8f77041e\" returns successfully" Aug 13 07:17:34.663197 kubelet[2650]: I0813 07:17:34.663107 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d2bjf" podStartSLOduration=5.663086141 podStartE2EDuration="5.663086141s" podCreationTimestamp="2025-08-13 07:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:31.65593208 +0000 UTC m=+7.181001199" watchObservedRunningTime="2025-08-13 07:17:34.663086141 +0000 UTC m=+10.188155280" Aug 13 07:17:34.663709 
kubelet[2650]: I0813 07:17:34.663269 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-vccbj" podStartSLOduration=3.18551677 podStartE2EDuration="5.663261873s" podCreationTimestamp="2025-08-13 07:17:29 +0000 UTC" firstStartedPulling="2025-08-13 07:17:31.288368446 +0000 UTC m=+6.813437565" lastFinishedPulling="2025-08-13 07:17:33.766113559 +0000 UTC m=+9.291182668" observedRunningTime="2025-08-13 07:17:34.66299889 +0000 UTC m=+10.188068019" watchObservedRunningTime="2025-08-13 07:17:34.663261873 +0000 UTC m=+10.188331012" Aug 13 07:17:37.049298 kubelet[2650]: E0813 07:17:37.048944 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:40.495384 sudo[1766]: pam_unix(sudo:session): session closed for user root Aug 13 07:17:40.497773 sshd[1759]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:40.502048 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:44536.service: Deactivated successfully. Aug 13 07:17:40.506546 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:17:40.509043 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:17:40.510811 systemd-logind[1556]: Removed session 7. Aug 13 07:17:43.332760 kubelet[2650]: I0813 07:17:43.332568 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f0afb3-7ea5-41a4-95e2-985491d142c4-tigera-ca-bundle\") pod \"calico-typha-f5c4c4d79-krg2m\" (UID: \"e7f0afb3-7ea5-41a4-95e2-985491d142c4\") " pod="calico-system/calico-typha-f5c4c4d79-krg2m" Aug 13 07:17:43.332760 kubelet[2650]: I0813 07:17:43.332648 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7f0afb3-7ea5-41a4-95e2-985491d142c4-typha-certs\") pod \"calico-typha-f5c4c4d79-krg2m\" (UID: \"e7f0afb3-7ea5-41a4-95e2-985491d142c4\") " pod="calico-system/calico-typha-f5c4c4d79-krg2m" Aug 13 07:17:43.332760 kubelet[2650]: I0813 07:17:43.332675 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp79z\" (UniqueName: \"kubernetes.io/projected/e7f0afb3-7ea5-41a4-95e2-985491d142c4-kube-api-access-vp79z\") pod \"calico-typha-f5c4c4d79-krg2m\" (UID: \"e7f0afb3-7ea5-41a4-95e2-985491d142c4\") " pod="calico-system/calico-typha-f5c4c4d79-krg2m" Aug 13 07:17:43.571022 kubelet[2650]: E0813 07:17:43.570978 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:43.571673 containerd[1574]: time="2025-08-13T07:17:43.571620007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5c4c4d79-krg2m,Uid:e7f0afb3-7ea5-41a4-95e2-985491d142c4,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:43.603038 containerd[1574]: time="2025-08-13T07:17:43.602751950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:43.603038 containerd[1574]: time="2025-08-13T07:17:43.602833024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:43.603038 containerd[1574]: time="2025-08-13T07:17:43.602850252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:43.603382 containerd[1574]: time="2025-08-13T07:17:43.603305741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:43.634237 kubelet[2650]: I0813 07:17:43.633974 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-cni-bin-dir\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634237 kubelet[2650]: I0813 07:17:43.634026 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmws\" (UniqueName: \"kubernetes.io/projected/b5852fad-7e71-45c7-b967-950ee2ce7f7d-kube-api-access-dlmws\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634237 kubelet[2650]: I0813 07:17:43.634045 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-lib-modules\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634237 kubelet[2650]: I0813 07:17:43.634061 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-cni-log-dir\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634237 kubelet[2650]: I0813 07:17:43.634075 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5852fad-7e71-45c7-b967-950ee2ce7f7d-node-certs\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634521 kubelet[2650]: I0813 07:17:43.634088 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-cni-net-dir\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634521 kubelet[2650]: I0813 07:17:43.634110 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-var-run-calico\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634521 kubelet[2650]: I0813 07:17:43.634125 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-flexvol-driver-host\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" 
Aug 13 07:17:43.634521 kubelet[2650]: I0813 07:17:43.634138 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-policysync\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634521 kubelet[2650]: I0813 07:17:43.634151 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5852fad-7e71-45c7-b967-950ee2ce7f7d-tigera-ca-bundle\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634640 kubelet[2650]: I0813 07:17:43.634170 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-var-lib-calico\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.634640 kubelet[2650]: I0813 07:17:43.634183 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5852fad-7e71-45c7-b967-950ee2ce7f7d-xtables-lock\") pod \"calico-node-tjp8s\" (UID: \"b5852fad-7e71-45c7-b967-950ee2ce7f7d\") " pod="calico-system/calico-node-tjp8s" Aug 13 07:17:43.662982 containerd[1574]: time="2025-08-13T07:17:43.662924199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5c4c4d79-krg2m,Uid:e7f0afb3-7ea5-41a4-95e2-985491d142c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"db868d6f4a64ef96712346a964fdf9f6777b984ce871e124a6e558a83335612a\"" Aug 13 07:17:43.663730 kubelet[2650]: E0813 07:17:43.663702 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:43.664693 containerd[1574]: time="2025-08-13T07:17:43.664656565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:17:43.736361 kubelet[2650]: E0813 07:17:43.736322 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:43.736361 kubelet[2650]: W0813 07:17:43.736350 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:43.736554 kubelet[2650]: E0813 07:17:43.736392 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:43.740430 kubelet[2650]: E0813 07:17:43.740390 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:43.740430 kubelet[2650]: W0813 07:17:43.740424 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:43.740545 kubelet[2650]: E0813 07:17:43.740453 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:43.743925 kubelet[2650]: E0813 07:17:43.743897 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:43.743925 kubelet[2650]: W0813 07:17:43.743924 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:43.744032 kubelet[2650]: E0813 07:17:43.743948 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:43.864143 containerd[1574]: time="2025-08-13T07:17:43.864095838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjp8s,Uid:b5852fad-7e71-45c7-b967-950ee2ce7f7d,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:44.240546 kubelet[2650]: E0813 07:17:44.240150 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:44.276725 containerd[1574]: time="2025-08-13T07:17:44.276538483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:44.278946 containerd[1574]: time="2025-08-13T07:17:44.277425481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:44.278946 containerd[1574]: time="2025-08-13T07:17:44.277452429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:44.278946 containerd[1574]: time="2025-08-13T07:17:44.277656255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:44.330821 containerd[1574]: time="2025-08-13T07:17:44.330689401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjp8s,Uid:b5852fad-7e71-45c7-b967-950ee2ce7f7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\"" Aug 13 07:17:44.338643 kubelet[2650]: E0813 07:17:44.338554 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.338643 kubelet[2650]: W0813 07:17:44.338576 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.338643 kubelet[2650]: E0813 07:17:44.338597 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.339306 kubelet[2650]: E0813 07:17:44.338862 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.339306 kubelet[2650]: W0813 07:17:44.338889 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.339306 kubelet[2650]: E0813 07:17:44.338902 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.339306 kubelet[2650]: E0813 07:17:44.339144 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.339306 kubelet[2650]: W0813 07:17:44.339167 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.339306 kubelet[2650]: E0813 07:17:44.339179 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.339506 kubelet[2650]: E0813 07:17:44.339408 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.339506 kubelet[2650]: W0813 07:17:44.339418 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.339506 kubelet[2650]: E0813 07:17:44.339430 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.339779 kubelet[2650]: E0813 07:17:44.339680 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.339779 kubelet[2650]: W0813 07:17:44.339694 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.339779 kubelet[2650]: E0813 07:17:44.339706 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.339995 kubelet[2650]: E0813 07:17:44.339977 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.339995 kubelet[2650]: W0813 07:17:44.339991 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.340077 kubelet[2650]: E0813 07:17:44.340010 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.340287 kubelet[2650]: E0813 07:17:44.340271 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.340287 kubelet[2650]: W0813 07:17:44.340285 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.340394 kubelet[2650]: E0813 07:17:44.340297 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.340543 kubelet[2650]: E0813 07:17:44.340523 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.340543 kubelet[2650]: W0813 07:17:44.340536 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.340628 kubelet[2650]: E0813 07:17:44.340550 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.340889 kubelet[2650]: E0813 07:17:44.340834 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.341003 kubelet[2650]: W0813 07:17:44.340982 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.341051 kubelet[2650]: E0813 07:17:44.341002 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.341424 kubelet[2650]: E0813 07:17:44.341245 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.341424 kubelet[2650]: W0813 07:17:44.341256 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.341424 kubelet[2650]: E0813 07:17:44.341266 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.341550 kubelet[2650]: E0813 07:17:44.341473 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.341550 kubelet[2650]: W0813 07:17:44.341482 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.341550 kubelet[2650]: E0813 07:17:44.341492 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.341698 kubelet[2650]: E0813 07:17:44.341684 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.341698 kubelet[2650]: W0813 07:17:44.341695 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.341783 kubelet[2650]: E0813 07:17:44.341705 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.342346 kubelet[2650]: E0813 07:17:44.342206 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.342346 kubelet[2650]: W0813 07:17:44.342219 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.342346 kubelet[2650]: E0813 07:17:44.342232 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.342546 kubelet[2650]: E0813 07:17:44.342461 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.342546 kubelet[2650]: W0813 07:17:44.342470 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.342546 kubelet[2650]: E0813 07:17:44.342478 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.342667 kubelet[2650]: E0813 07:17:44.342641 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.342667 kubelet[2650]: W0813 07:17:44.342654 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.342667 kubelet[2650]: E0813 07:17:44.342662 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.342894 kubelet[2650]: E0813 07:17:44.342834 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.342894 kubelet[2650]: W0813 07:17:44.342845 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.342894 kubelet[2650]: E0813 07:17:44.342853 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.343123 kubelet[2650]: E0813 07:17:44.343054 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.343123 kubelet[2650]: W0813 07:17:44.343067 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.343123 kubelet[2650]: E0813 07:17:44.343076 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.343308 kubelet[2650]: E0813 07:17:44.343291 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.343308 kubelet[2650]: W0813 07:17:44.343303 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.343370 kubelet[2650]: E0813 07:17:44.343312 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.343510 kubelet[2650]: E0813 07:17:44.343495 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.343510 kubelet[2650]: W0813 07:17:44.343505 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.343563 kubelet[2650]: E0813 07:17:44.343512 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.343690 kubelet[2650]: E0813 07:17:44.343677 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.343690 kubelet[2650]: W0813 07:17:44.343687 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.343736 kubelet[2650]: E0813 07:17:44.343695 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.439249 kubelet[2650]: E0813 07:17:44.439205 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.439249 kubelet[2650]: W0813 07:17:44.439229 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.439249 kubelet[2650]: E0813 07:17:44.439250 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.439452 kubelet[2650]: I0813 07:17:44.439277 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37049b96-5b1d-4b14-aa39-fa916253ae4c-kubelet-dir\") pod \"csi-node-driver-b99jc\" (UID: \"37049b96-5b1d-4b14-aa39-fa916253ae4c\") " pod="calico-system/csi-node-driver-b99jc" Aug 13 07:17:44.439506 kubelet[2650]: E0813 07:17:44.439481 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.439506 kubelet[2650]: W0813 07:17:44.439499 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.439633 kubelet[2650]: E0813 07:17:44.439517 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.439633 kubelet[2650]: I0813 07:17:44.439537 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37049b96-5b1d-4b14-aa39-fa916253ae4c-socket-dir\") pod \"csi-node-driver-b99jc\" (UID: \"37049b96-5b1d-4b14-aa39-fa916253ae4c\") " pod="calico-system/csi-node-driver-b99jc" Aug 13 07:17:44.439777 kubelet[2650]: E0813 07:17:44.439748 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.439777 kubelet[2650]: W0813 07:17:44.439763 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.439777 kubelet[2650]: E0813 07:17:44.439777 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.439999 kubelet[2650]: E0813 07:17:44.439973 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.439999 kubelet[2650]: W0813 07:17:44.439985 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.439999 kubelet[2650]: E0813 07:17:44.439997 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.440327 kubelet[2650]: E0813 07:17:44.440300 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.440327 kubelet[2650]: W0813 07:17:44.440316 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.440327 kubelet[2650]: E0813 07:17:44.440334 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.440533 kubelet[2650]: I0813 07:17:44.440352 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37049b96-5b1d-4b14-aa39-fa916253ae4c-registration-dir\") pod \"csi-node-driver-b99jc\" (UID: \"37049b96-5b1d-4b14-aa39-fa916253ae4c\") " pod="calico-system/csi-node-driver-b99jc" Aug 13 07:17:44.440607 kubelet[2650]: E0813 07:17:44.440594 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.440607 kubelet[2650]: W0813 07:17:44.440605 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.440668 kubelet[2650]: E0813 07:17:44.440619 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.440820 kubelet[2650]: E0813 07:17:44.440805 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.440860 kubelet[2650]: W0813 07:17:44.440825 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.440860 kubelet[2650]: E0813 07:17:44.440839 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.441059 kubelet[2650]: E0813 07:17:44.441043 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.441059 kubelet[2650]: W0813 07:17:44.441052 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.441122 kubelet[2650]: E0813 07:17:44.441066 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.441157 kubelet[2650]: I0813 07:17:44.441119 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37049b96-5b1d-4b14-aa39-fa916253ae4c-varrun\") pod \"csi-node-driver-b99jc\" (UID: \"37049b96-5b1d-4b14-aa39-fa916253ae4c\") " pod="calico-system/csi-node-driver-b99jc" Aug 13 07:17:44.441443 kubelet[2650]: E0813 07:17:44.441413 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.441479 kubelet[2650]: W0813 07:17:44.441442 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.441479 kubelet[2650]: E0813 07:17:44.441473 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.441676 kubelet[2650]: E0813 07:17:44.441664 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.441676 kubelet[2650]: W0813 07:17:44.441675 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.441741 kubelet[2650]: E0813 07:17:44.441690 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.441741 kubelet[2650]: I0813 07:17:44.441707 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrh6q\" (UniqueName: \"kubernetes.io/projected/37049b96-5b1d-4b14-aa39-fa916253ae4c-kube-api-access-jrh6q\") pod \"csi-node-driver-b99jc\" (UID: \"37049b96-5b1d-4b14-aa39-fa916253ae4c\") " pod="calico-system/csi-node-driver-b99jc" Aug 13 07:17:44.441931 kubelet[2650]: E0813 07:17:44.441916 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.441931 kubelet[2650]: W0813 07:17:44.441928 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.441985 kubelet[2650]: E0813 07:17:44.441942 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.442183 kubelet[2650]: E0813 07:17:44.442148 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.442183 kubelet[2650]: W0813 07:17:44.442161 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.442183 kubelet[2650]: E0813 07:17:44.442170 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.442454 kubelet[2650]: E0813 07:17:44.442436 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.442498 kubelet[2650]: W0813 07:17:44.442456 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.442498 kubelet[2650]: E0813 07:17:44.442468 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.442701 kubelet[2650]: E0813 07:17:44.442682 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.442701 kubelet[2650]: W0813 07:17:44.442696 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.442797 kubelet[2650]: E0813 07:17:44.442707 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.442964 kubelet[2650]: E0813 07:17:44.442945 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.442964 kubelet[2650]: W0813 07:17:44.442958 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.443030 kubelet[2650]: E0813 07:17:44.442970 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.543777 kubelet[2650]: E0813 07:17:44.543621 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.543777 kubelet[2650]: W0813 07:17:44.543651 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.543777 kubelet[2650]: E0813 07:17:44.543676 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.544085 kubelet[2650]: E0813 07:17:44.544035 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.544085 kubelet[2650]: W0813 07:17:44.544069 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.544258 kubelet[2650]: E0813 07:17:44.544103 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.544505 kubelet[2650]: E0813 07:17:44.544486 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.544505 kubelet[2650]: W0813 07:17:44.544499 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.544723 kubelet[2650]: E0813 07:17:44.544512 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.544903 kubelet[2650]: E0813 07:17:44.544838 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.544903 kubelet[2650]: W0813 07:17:44.544870 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.545018 kubelet[2650]: E0813 07:17:44.544923 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.545140 kubelet[2650]: E0813 07:17:44.545124 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.545140 kubelet[2650]: W0813 07:17:44.545137 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.545140 kubelet[2650]: E0813 07:17:44.545154 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.545695 kubelet[2650]: E0813 07:17:44.545457 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.545695 kubelet[2650]: W0813 07:17:44.545469 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.545695 kubelet[2650]: E0813 07:17:44.545485 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.546254 kubelet[2650]: E0813 07:17:44.545932 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.546254 kubelet[2650]: W0813 07:17:44.545954 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.546254 kubelet[2650]: E0813 07:17:44.545982 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.546476 kubelet[2650]: E0813 07:17:44.546458 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.546547 kubelet[2650]: W0813 07:17:44.546531 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.546659 kubelet[2650]: E0813 07:17:44.546630 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.547114 kubelet[2650]: E0813 07:17:44.547084 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.547114 kubelet[2650]: W0813 07:17:44.547101 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.547114 kubelet[2650]: E0813 07:17:44.547175 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.547459 kubelet[2650]: E0813 07:17:44.547439 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.547496 kubelet[2650]: W0813 07:17:44.547458 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.547496 kubelet[2650]: E0813 07:17:44.547490 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:44.547766 kubelet[2650]: E0813 07:17:44.547748 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.547766 kubelet[2650]: W0813 07:17:44.547762 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.547842 kubelet[2650]: E0813 07:17:44.547791 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:44.548035 kubelet[2650]: E0813 07:17:44.548016 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.548035 kubelet[2650]: W0813 07:17:44.548031 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.548113 kubelet[2650]: E0813 07:17:44.548062 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... 13 further identical driver-call.go/plugins.go error triplets omitted, 07:17:44.548346 through 07:17:44.551425 ...]
Aug 13 07:17:44.560408 kubelet[2650]: E0813 07:17:44.560365 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:44.560408 kubelet[2650]: W0813 07:17:44.560395 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:44.560408 kubelet[2650]: E0813 07:17:44.560420 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:45.816519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508671488.mount: Deactivated successfully.
Aug 13 07:17:46.171441 containerd[1574]: time="2025-08-13T07:17:46.171378535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:46.172364 containerd[1574]: time="2025-08-13T07:17:46.172321182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:17:46.173692 containerd[1574]: time="2025-08-13T07:17:46.173656271Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:46.176339 containerd[1574]: time="2025-08-13T07:17:46.176294202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:46.177280 containerd[1574]: time="2025-08-13T07:17:46.177076531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.512378356s"
Aug 13 07:17:46.177280 containerd[1574]: time="2025-08-13T07:17:46.177117087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:17:46.178347 containerd[1574]: time="2025-08-13T07:17:46.178313181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:17:46.203063 containerd[1574]: time="2025-08-13T07:17:46.203017439Z" level=info msg="CreateContainer within sandbox \"db868d6f4a64ef96712346a964fdf9f6777b984ce871e124a6e558a83335612a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:17:46.216327 containerd[1574]: time="2025-08-13T07:17:46.216272361Z" level=info msg="CreateContainer within sandbox \"db868d6f4a64ef96712346a964fdf9f6777b984ce871e124a6e558a83335612a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"afb1dabaa91d443596d8d719b9e821b6dff9ef476353881c04f42cf246b2c93b\""
Aug 13 07:17:46.216990 containerd[1574]: time="2025-08-13T07:17:46.216818799Z" level=info msg="StartContainer for \"afb1dabaa91d443596d8d719b9e821b6dff9ef476353881c04f42cf246b2c93b\""
Aug 13 07:17:46.302735 containerd[1574]: time="2025-08-13T07:17:46.302682858Z" level=info msg="StartContainer for \"afb1dabaa91d443596d8d719b9e821b6dff9ef476353881c04f42cf246b2c93b\" returns successfully"
Aug 13 07:17:46.624615 kubelet[2650]: E0813 07:17:46.624227 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c"
Aug 13 07:17:46.684534 kubelet[2650]: E0813 07:17:46.684499 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
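[Annotation: the repeated driver-call.go/plugins.go triplets in this log come from kubelet's FlexVolume probe. It scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, execs the driver binary uds with the argument init, and unmarshals the driver's stdout as JSON; the binary is absent here, stdout is empty, and unmarshalling "" is exactly "unexpected end of JSON input". Below is a minimal sketch of the init contract a working driver would satisfy. This is a hypothetical Go driver for illustration, not the real nodeagent~uds binary.]

    // Hypothetical FlexVolume driver sketch: kubelet invokes the driver binary
    // with a subcommand ("init", "mount", ...) and parses its stdout as JSON.
    // Printing nothing is what yields "unexpected end of JSON input" above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the FlexVolume API expects on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) < 2 {
            reply(driverStatus{Status: "Failure", Message: "no command given"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Tell kubelet not to route attach/detach through this driver.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            reply(driverStatus{Status: "Not supported"})
        }
    }

[Until a binary honoring this contract exists at .../nodeagent~uds/uds, kubelet re-logs the triplet on every plugin probe; the noise is harmless if nothing on the node actually consumes the FlexVolume.]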
Aug 13 07:17:46.760569 kubelet[2650]: E0813 07:17:46.760516 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:46.760569 kubelet[2650]: W0813 07:17:46.760547 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:46.760569 kubelet[2650]: E0813 07:17:46.760571 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... 31 further identical driver-call.go/plugins.go error triplets omitted, 07:17:46.760970 through 07:17:46.770585 ...]
Aug 13 07:17:46.770844 kubelet[2650]: E0813 07:17:46.770829 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:46.770844 kubelet[2650]: W0813 07:17:46.770841 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:46.770899 kubelet[2650]: E0813 07:17:46.770851 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:47.686347 kubelet[2650]: I0813 07:17:47.686305 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:17:47.686990 kubelet[2650]: E0813 07:17:47.686775 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:17:47.771221 kubelet[2650]: E0813 07:17:47.771175 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:47.771221 kubelet[2650]: W0813 07:17:47.771202 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:47.771221 kubelet[2650]: E0813 07:17:47.771224 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
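[Annotation: the recurring dns.go:153 events are kubelet capping the nameservers it copies from the node's resolv.conf; upstream Kubernetes allows at most three, drops the rest, and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A rough sketch of that truncation follows; it is illustrative, not kubelet's actual resolv.conf parser.]

    // Sketch of the nameserver cap behind "Nameserver limits exceeded":
    // parse resolv.conf, keep at most three nameservers, report truncation.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // Kubernetes' per-pod resolv.conf limit

    func capNameservers(resolvConf string) (applied []string, truncated bool) {
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                applied = append(applied, fields[1])
            }
        }
        if len(applied) > maxNameservers {
            return applied[:maxNameservers], true // kubelet warns in this case
        }
        return applied, false
    }

    func main() {
        applied, truncated := capNameservers(
            "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n")
        fmt.Println(applied, "truncated:", truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] truncated: true
    }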
[... 31 further identical driver-call.go/plugins.go error triplets omitted, 07:17:47.771460 through 07:17:47.780910 ...]
Aug 13 07:17:47.781179 kubelet[2650]: E0813 07:17:47.781151 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:47.781179 kubelet[2650]: W0813 07:17:47.781164 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:47.781179 kubelet[2650]: E0813 07:17:47.781172 2650 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:48.624260 kubelet[2650]: E0813 07:17:48.624172 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c"
Aug 13 07:17:49.347218 containerd[1574]: time="2025-08-13T07:17:49.347138140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.348281 containerd[1574]: time="2025-08-13T07:17:49.348213972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 07:17:49.349669 containerd[1574]: time="2025-08-13T07:17:49.349612480Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.352387 containerd[1574]: time="2025-08-13T07:17:49.352331913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:49.353098 containerd[1574]: time="2025-08-13T07:17:49.353036258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 3.174690939s"
Aug 13 07:17:49.353098 containerd[1574]: time="2025-08-13T07:17:49.353091503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:17:49.355498 containerd[1574]: 
time="2025-08-13T07:17:49.355461536Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:17:49.895221 containerd[1574]: time="2025-08-13T07:17:49.895140676Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270\"" Aug 13 07:17:49.896270 containerd[1574]: time="2025-08-13T07:17:49.896206747Z" level=info msg="StartContainer for \"c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270\"" Aug 13 07:17:50.013167 containerd[1574]: time="2025-08-13T07:17:50.013099914Z" level=info msg="StartContainer for \"c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270\" returns successfully" Aug 13 07:17:50.034513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270-rootfs.mount: Deactivated successfully. Aug 13 07:17:50.429047 containerd[1574]: time="2025-08-13T07:17:50.426858276Z" level=info msg="shim disconnected" id=c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270 namespace=k8s.io Aug 13 07:17:50.429047 containerd[1574]: time="2025-08-13T07:17:50.429034405Z" level=warning msg="cleaning up after shim disconnected" id=c14b4488e9a9248d95505de246d93118f3c62a793b57462d9bf923ceff4a9270 namespace=k8s.io Aug 13 07:17:50.429047 containerd[1574]: time="2025-08-13T07:17:50.429048795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:17:50.625234 kubelet[2650]: E0813 07:17:50.625189 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:50.699486 containerd[1574]: time="2025-08-13T07:17:50.699331937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:17:50.848384 kubelet[2650]: I0813 07:17:50.848072 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f5c4c4d79-krg2m" podStartSLOduration=5.33430014 podStartE2EDuration="7.848053381s" podCreationTimestamp="2025-08-13 07:17:43 +0000 UTC" firstStartedPulling="2025-08-13 07:17:43.664346328 +0000 UTC m=+19.189415437" lastFinishedPulling="2025-08-13 07:17:46.178099559 +0000 UTC m=+21.703168678" observedRunningTime="2025-08-13 07:17:46.694032996 +0000 UTC m=+22.219102125" watchObservedRunningTime="2025-08-13 07:17:50.848053381 +0000 UTC m=+26.373122500" Aug 13 07:17:52.623582 kubelet[2650]: E0813 07:17:52.623531 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:54.623996 kubelet[2650]: E0813 07:17:54.623935 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:56.067224 systemd-resolved[1453]: Under memory pressure, flushing caches. Aug 13 07:17:56.067279 systemd-resolved[1453]: Flushed all caches. Aug 13 07:17:56.101943 systemd-journald[1159]: Under memory pressure, flushing caches. Aug 13 07:17:56.625600 kubelet[2650]: E0813 07:17:56.625502 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:56.767983 containerd[1574]: time="2025-08-13T07:17:56.767915752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:56.768788 containerd[1574]: time="2025-08-13T07:17:56.768744090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:17:56.769940 containerd[1574]: time="2025-08-13T07:17:56.769908016Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:56.772222 containerd[1574]: time="2025-08-13T07:17:56.772191998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:56.773014 containerd[1574]: time="2025-08-13T07:17:56.772982819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.073605419s" Aug 13 07:17:56.773083 containerd[1574]: time="2025-08-13T07:17:56.773017320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:17:56.778027 containerd[1574]: time="2025-08-13T07:17:56.777996857Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:17:56.792768 containerd[1574]: time="2025-08-13T07:17:56.792719234Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e\"" Aug 13 07:17:56.793298 containerd[1574]: time="2025-08-13T07:17:56.793258970Z" level=info msg="StartContainer for \"e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e\"" Aug 13 07:17:57.479808 containerd[1574]: time="2025-08-13T07:17:57.479743070Z" level=info msg="StartContainer for \"e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e\" returns successfully" Aug 13 07:17:58.129995 systemd-journald[1159]: Under memory pressure, flushing caches. Aug 13 07:17:58.115071 systemd-resolved[1453]: Under memory pressure, flushing caches. 
Aug 13 07:17:58.115084 systemd-resolved[1453]: Flushed all caches. Aug 13 07:17:58.624227 kubelet[2650]: E0813 07:17:58.624176 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:17:59.240416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e-rootfs.mount: Deactivated successfully. Aug 13 07:17:59.242266 containerd[1574]: time="2025-08-13T07:17:59.242186714Z" level=info msg="shim disconnected" id=e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e namespace=k8s.io Aug 13 07:17:59.242266 containerd[1574]: time="2025-08-13T07:17:59.242249912Z" level=warning msg="cleaning up after shim disconnected" id=e93898865fada81855ac7377881bb41d7cfce0320ce8910053b7a2dcfdc7086e namespace=k8s.io Aug 13 07:17:59.242266 containerd[1574]: time="2025-08-13T07:17:59.242258881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:17:59.313057 kubelet[2650]: I0813 07:17:59.313018 2650 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:17:59.452497 kubelet[2650]: I0813 07:17:59.452446 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94d947c1-874c-414a-a146-eedc813ee768-whisker-backend-key-pair\") pod \"whisker-6c8c97fd-vpb85\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " pod="calico-system/whisker-6c8c97fd-vpb85" Aug 13 07:17:59.452497 kubelet[2650]: I0813 07:17:59.452498 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ce3415-4567-4b4f-85b1-ab7682c65560-config-volume\") pod \"coredns-7c65d6cfc9-rttzq\" (UID: \"f9ce3415-4567-4b4f-85b1-ab7682c65560\") " pod="kube-system/coredns-7c65d6cfc9-rttzq" Aug 13 07:17:59.452497 kubelet[2650]: I0813 07:17:59.452523 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99eb19aa-3962-4a4e-90dc-6113f9d3975a-calico-apiserver-certs\") pod \"calico-apiserver-6dc6dbff5-vgmmt\" (UID: \"99eb19aa-3962-4a4e-90dc-6113f9d3975a\") " pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" Aug 13 07:17:59.452856 kubelet[2650]: I0813 07:17:59.452544 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnjd\" (UniqueName: \"kubernetes.io/projected/94d947c1-874c-414a-a146-eedc813ee768-kube-api-access-rmnjd\") pod \"whisker-6c8c97fd-vpb85\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " pod="calico-system/whisker-6c8c97fd-vpb85" Aug 13 07:17:59.452856 kubelet[2650]: I0813 07:17:59.452601 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a60817e0-e119-4674-adab-2cc042d34e82-goldmane-key-pair\") pod \"goldmane-58fd7646b9-l9xx9\" (UID: \"a60817e0-e119-4674-adab-2cc042d34e82\") " pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:17:59.452856 kubelet[2650]: I0813 07:17:59.452628 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af09357e-c282-465e-8e3e-c2975907b447-config-volume\") pod \"coredns-7c65d6cfc9-mlvdk\" (UID: \"af09357e-c282-465e-8e3e-c2975907b447\") " pod="kube-system/coredns-7c65d6cfc9-mlvdk" Aug 13 07:17:59.452856 kubelet[2650]: I0813 07:17:59.452648 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblcn\" (UniqueName: \"kubernetes.io/projected/f9ce3415-4567-4b4f-85b1-ab7682c65560-kube-api-access-qblcn\") pod \"coredns-7c65d6cfc9-rttzq\" (UID: \"f9ce3415-4567-4b4f-85b1-ab7682c65560\") " pod="kube-system/coredns-7c65d6cfc9-rttzq" Aug 13 07:17:59.452856 kubelet[2650]: I0813 07:17:59.452663 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfsg2\" (UniqueName: \"kubernetes.io/projected/230a398e-7dc1-4ab4-8443-e6fef0e021f2-kube-api-access-jfsg2\") pod \"calico-apiserver-6dc6dbff5-69nqm\" (UID: \"230a398e-7dc1-4ab4-8443-e6fef0e021f2\") " pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" Aug 13 07:17:59.453014 kubelet[2650]: I0813 07:17:59.452685 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c443559-5d69-4d9a-86e1-d2701af11811-tigera-ca-bundle\") pod \"calico-kube-controllers-68bb7cfb49-tpxsq\" (UID: \"5c443559-5d69-4d9a-86e1-d2701af11811\") " pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" Aug 13 07:17:59.453014 kubelet[2650]: I0813 07:17:59.452700 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a60817e0-e119-4674-adab-2cc042d34e82-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-l9xx9\" (UID: \"a60817e0-e119-4674-adab-2cc042d34e82\") " pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:17:59.453014 kubelet[2650]: I0813 07:17:59.452714 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hgwz\" (UniqueName: \"kubernetes.io/projected/5c443559-5d69-4d9a-86e1-d2701af11811-kube-api-access-2hgwz\") pod \"calico-kube-controllers-68bb7cfb49-tpxsq\" (UID: \"5c443559-5d69-4d9a-86e1-d2701af11811\") " pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" Aug 13 07:17:59.453014 kubelet[2650]: I0813 07:17:59.452731 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d947c1-874c-414a-a146-eedc813ee768-whisker-ca-bundle\") pod \"whisker-6c8c97fd-vpb85\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " pod="calico-system/whisker-6c8c97fd-vpb85" Aug 13 07:17:59.453014 kubelet[2650]: I0813 07:17:59.452747 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60817e0-e119-4674-adab-2cc042d34e82-config\") pod \"goldmane-58fd7646b9-l9xx9\" (UID: \"a60817e0-e119-4674-adab-2cc042d34e82\") " pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:17:59.453138 kubelet[2650]: I0813 07:17:59.452761 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvj7v\" (UniqueName: \"kubernetes.io/projected/a60817e0-e119-4674-adab-2cc042d34e82-kube-api-access-mvj7v\") pod \"goldmane-58fd7646b9-l9xx9\" (UID: \"a60817e0-e119-4674-adab-2cc042d34e82\") " 
pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:17:59.453138 kubelet[2650]: I0813 07:17:59.452776 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/230a398e-7dc1-4ab4-8443-e6fef0e021f2-calico-apiserver-certs\") pod \"calico-apiserver-6dc6dbff5-69nqm\" (UID: \"230a398e-7dc1-4ab4-8443-e6fef0e021f2\") " pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" Aug 13 07:17:59.453138 kubelet[2650]: I0813 07:17:59.452791 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5vt\" (UniqueName: \"kubernetes.io/projected/af09357e-c282-465e-8e3e-c2975907b447-kube-api-access-cq5vt\") pod \"coredns-7c65d6cfc9-mlvdk\" (UID: \"af09357e-c282-465e-8e3e-c2975907b447\") " pod="kube-system/coredns-7c65d6cfc9-mlvdk" Aug 13 07:17:59.453138 kubelet[2650]: I0813 07:17:59.452810 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bql\" (UniqueName: \"kubernetes.io/projected/99eb19aa-3962-4a4e-90dc-6113f9d3975a-kube-api-access-s6bql\") pod \"calico-apiserver-6dc6dbff5-vgmmt\" (UID: \"99eb19aa-3962-4a4e-90dc-6113f9d3975a\") " pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" Aug 13 07:17:59.642289 containerd[1574]: time="2025-08-13T07:17:59.642234085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb7cfb49-tpxsq,Uid:5c443559-5d69-4d9a-86e1-d2701af11811,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:59.655624 kubelet[2650]: E0813 07:17:59.655595 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:59.656110 containerd[1574]: time="2025-08-13T07:17:59.655970756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rttzq,Uid:f9ce3415-4567-4b4f-85b1-ab7682c65560,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:59.659595 containerd[1574]: time="2025-08-13T07:17:59.659558673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-l9xx9,Uid:a60817e0-e119-4674-adab-2cc042d34e82,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:59.663504 containerd[1574]: time="2025-08-13T07:17:59.663464827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-69nqm,Uid:230a398e-7dc1-4ab4-8443-e6fef0e021f2,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:59.667801 kubelet[2650]: E0813 07:17:59.667775 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:59.668176 containerd[1574]: time="2025-08-13T07:17:59.668144305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlvdk,Uid:af09357e-c282-465e-8e3e-c2975907b447,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:59.672928 containerd[1574]: time="2025-08-13T07:17:59.672866641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-vgmmt,Uid:99eb19aa-3962-4a4e-90dc-6113f9d3975a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:59.687772 containerd[1574]: time="2025-08-13T07:17:59.687722982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8c97fd-vpb85,Uid:94d947c1-874c-414a-a146-eedc813ee768,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:59.727799 
Aug 13 07:17:59.923845 containerd[1574]: time="2025-08-13T07:17:59.923672036Z" level=error msg="Failed to destroy network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:59.924168 containerd[1574]: time="2025-08-13T07:17:59.924137003Z" level=error msg="encountered an error cleaning up failed sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:59.924222 containerd[1574]: time="2025-08-13T07:17:59.924196855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb7cfb49-tpxsq,Uid:5c443559-5d69-4d9a-86e1-d2701af11811,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:59.934273 kubelet[2650]: E0813 07:17:59.934219 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:59.934402 kubelet[2650]: E0813 07:17:59.934308 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" Aug 13 07:17:59.934402 kubelet[2650]: E0813 07:17:59.934334 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" Aug 13 07:17:59.934402 kubelet[2650]: E0813 07:17:59.934385 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68bb7cfb49-tpxsq_calico-system(5c443559-5d69-4d9a-86e1-d2701af11811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68bb7cfb49-tpxsq_calico-system(5c443559-5d69-4d9a-86e1-d2701af11811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\\\": plugin
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" podUID="5c443559-5d69-4d9a-86e1-d2701af11811" Aug 13 07:18:00.173195 containerd[1574]: time="2025-08-13T07:18:00.173132328Z" level=error msg="Failed to destroy network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.174454 containerd[1574]: time="2025-08-13T07:18:00.174077679Z" level=error msg="encountered an error cleaning up failed sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.174598 containerd[1574]: time="2025-08-13T07:18:00.174532013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rttzq,Uid:f9ce3415-4567-4b4f-85b1-ab7682c65560,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.175017 kubelet[2650]: E0813 07:18:00.174848 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.175216 kubelet[2650]: E0813 07:18:00.175041 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rttzq" Aug 13 07:18:00.175216 kubelet[2650]: E0813 07:18:00.175062 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rttzq" Aug 13 07:18:00.175216 kubelet[2650]: E0813 07:18:00.175130 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rttzq_kube-system(f9ce3415-4567-4b4f-85b1-ab7682c65560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rttzq_kube-system(f9ce3415-4567-4b4f-85b1-ab7682c65560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rttzq" podUID="f9ce3415-4567-4b4f-85b1-ab7682c65560" Aug 13 07:18:00.197380 containerd[1574]: time="2025-08-13T07:18:00.197325065Z" level=error msg="Failed to destroy network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.198684 containerd[1574]: time="2025-08-13T07:18:00.197976089Z" level=error msg="encountered an error cleaning up failed sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.198684 containerd[1574]: time="2025-08-13T07:18:00.198024496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-vgmmt,Uid:99eb19aa-3962-4a4e-90dc-6113f9d3975a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.198812 kubelet[2650]: E0813 07:18:00.198289 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.198812 kubelet[2650]: E0813 07:18:00.198348 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" Aug 13 07:18:00.198812 kubelet[2650]: E0813 07:18:00.198367 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" Aug 13 07:18:00.198928 kubelet[2650]: E0813 07:18:00.198420 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc6dbff5-vgmmt_calico-apiserver(99eb19aa-3962-4a4e-90dc-6113f9d3975a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6dc6dbff5-vgmmt_calico-apiserver(99eb19aa-3962-4a4e-90dc-6113f9d3975a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" podUID="99eb19aa-3962-4a4e-90dc-6113f9d3975a" Aug 13 07:18:00.201995 containerd[1574]: time="2025-08-13T07:18:00.201742983Z" level=error msg="Failed to destroy network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.202258 containerd[1574]: time="2025-08-13T07:18:00.202201766Z" level=error msg="Failed to destroy network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203005 containerd[1574]: time="2025-08-13T07:18:00.202961922Z" level=error msg="encountered an error cleaning up failed sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203176 containerd[1574]: time="2025-08-13T07:18:00.203018947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlvdk,Uid:af09357e-c282-465e-8e3e-c2975907b447,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203282 kubelet[2650]: E0813 07:18:00.203255 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203350 kubelet[2650]: E0813 07:18:00.203291 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mlvdk" Aug 13 07:18:00.203350 kubelet[2650]: E0813 07:18:00.203335 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mlvdk" Aug 13 07:18:00.203697 kubelet[2650]: E0813 07:18:00.203374 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mlvdk_kube-system(af09357e-c282-465e-8e3e-c2975907b447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mlvdk_kube-system(af09357e-c282-465e-8e3e-c2975907b447)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mlvdk" podUID="af09357e-c282-465e-8e3e-c2975907b447" Aug 13 07:18:00.203791 containerd[1574]: time="2025-08-13T07:18:00.203439001Z" level=error msg="encountered an error cleaning up failed sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203791 containerd[1574]: time="2025-08-13T07:18:00.203559075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-l9xx9,Uid:a60817e0-e119-4674-adab-2cc042d34e82,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203869 kubelet[2650]: E0813 07:18:00.203753 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.203869 kubelet[2650]: E0813 07:18:00.203790 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:18:00.203869 kubelet[2650]: E0813 07:18:00.203813 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-l9xx9" Aug 13 07:18:00.204044 kubelet[2650]: E0813 07:18:00.203848 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-58fd7646b9-l9xx9_calico-system(a60817e0-e119-4674-adab-2cc042d34e82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-l9xx9_calico-system(a60817e0-e119-4674-adab-2cc042d34e82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-l9xx9" podUID="a60817e0-e119-4674-adab-2cc042d34e82" Aug 13 07:18:00.219081 containerd[1574]: time="2025-08-13T07:18:00.219003623Z" level=error msg="Failed to destroy network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.219456 containerd[1574]: time="2025-08-13T07:18:00.219420951Z" level=error msg="encountered an error cleaning up failed sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.228073 containerd[1574]: time="2025-08-13T07:18:00.228003333Z" level=error msg="Failed to destroy network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.228493 containerd[1574]: time="2025-08-13T07:18:00.228441984Z" level=error msg="encountered an error cleaning up failed sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.228493 containerd[1574]: time="2025-08-13T07:18:00.228495614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8c97fd-vpb85,Uid:94d947c1-874c-414a-a146-eedc813ee768,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.228766 kubelet[2650]: E0813 07:18:00.228726 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.228831 kubelet[2650]: E0813 07:18:00.228789 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c8c97fd-vpb85" Aug 13 07:18:00.228831 kubelet[2650]: E0813 07:18:00.228808 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c8c97fd-vpb85" Aug 13 07:18:00.228898 kubelet[2650]: E0813 07:18:00.228855 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c8c97fd-vpb85_calico-system(94d947c1-874c-414a-a146-eedc813ee768)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c8c97fd-vpb85_calico-system(94d947c1-874c-414a-a146-eedc813ee768)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c8c97fd-vpb85" podUID="94d947c1-874c-414a-a146-eedc813ee768" Aug 13 07:18:00.230939 containerd[1574]: time="2025-08-13T07:18:00.230862183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-69nqm,Uid:230a398e-7dc1-4ab4-8443-e6fef0e021f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.231250 kubelet[2650]: E0813 07:18:00.231201 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.231308 kubelet[2650]: E0813 07:18:00.231277 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" Aug 13 07:18:00.231364 kubelet[2650]: E0813 07:18:00.231303 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" Aug 13 07:18:00.231480 kubelet[2650]: E0813 07:18:00.231405 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc6dbff5-69nqm_calico-apiserver(230a398e-7dc1-4ab4-8443-e6fef0e021f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc6dbff5-69nqm_calico-apiserver(230a398e-7dc1-4ab4-8443-e6fef0e021f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" podUID="230a398e-7dc1-4ab4-8443-e6fef0e021f2" Aug 13 07:18:00.627635 containerd[1574]: time="2025-08-13T07:18:00.627583733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b99jc,Uid:37049b96-5b1d-4b14-aa39-fa916253ae4c,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:00.700156 containerd[1574]: time="2025-08-13T07:18:00.700083695Z" level=error msg="Failed to destroy network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.700522 containerd[1574]: time="2025-08-13T07:18:00.700492946Z" level=error msg="encountered an error cleaning up failed sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.700572 containerd[1574]: time="2025-08-13T07:18:00.700541605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b99jc,Uid:37049b96-5b1d-4b14-aa39-fa916253ae4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.700846 kubelet[2650]: E0813 07:18:00.700800 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.701559 kubelet[2650]: E0813 07:18:00.700867 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b99jc" Aug 13 07:18:00.701559 kubelet[2650]: E0813 07:18:00.700904 2650 kuberuntime_manager.go:1170] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b99jc" Aug 13 07:18:00.701559 kubelet[2650]: E0813 07:18:00.700951 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b99jc_calico-system(37049b96-5b1d-4b14-aa39-fa916253ae4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b99jc_calico-system(37049b96-5b1d-4b14-aa39-fa916253ae4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:18:00.702983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656-shm.mount: Deactivated successfully. Aug 13 07:18:00.729490 kubelet[2650]: I0813 07:18:00.729453 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:00.731509 kubelet[2650]: I0813 07:18:00.731474 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:00.732852 kubelet[2650]: I0813 07:18:00.732831 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:00.734821 kubelet[2650]: I0813 07:18:00.734338 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:00.761846 containerd[1574]: time="2025-08-13T07:18:00.761042291Z" level=info msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" Aug 13 07:18:00.761846 containerd[1574]: time="2025-08-13T07:18:00.761310256Z" level=info msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" Aug 13 07:18:00.762181 containerd[1574]: time="2025-08-13T07:18:00.762151005Z" level=info msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" Aug 13 07:18:00.762867 containerd[1574]: time="2025-08-13T07:18:00.762790324Z" level=info msg="Ensure that sandbox 8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f in task-service has been cleanup successfully" Aug 13 07:18:00.762867 containerd[1574]: time="2025-08-13T07:18:00.762815115Z" level=info msg="Ensure that sandbox b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649 in task-service has been cleanup successfully" Aug 13 07:18:00.763128 containerd[1574]: time="2025-08-13T07:18:00.762821578Z" level=info msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" Aug 13 07:18:00.763169 kubelet[2650]: I0813 07:18:00.762962 2650 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:00.763274 containerd[1574]: time="2025-08-13T07:18:00.763226500Z" level=info msg="Ensure that sandbox 7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba in task-service has been cleanup successfully" Aug 13 07:18:00.763812 containerd[1574]: time="2025-08-13T07:18:00.763766538Z" level=info msg="Ensure that sandbox 0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2 in task-service has been cleanup successfully" Aug 13 07:18:00.769901 containerd[1574]: time="2025-08-13T07:18:00.769829409Z" level=info msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" Aug 13 07:18:00.770582 containerd[1574]: time="2025-08-13T07:18:00.770241607Z" level=info msg="Ensure that sandbox c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789 in task-service has been cleanup successfully" Aug 13 07:18:00.772188 kubelet[2650]: I0813 07:18:00.772157 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:00.774808 containerd[1574]: time="2025-08-13T07:18:00.774758307Z" level=info msg="StopPodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\"" Aug 13 07:18:00.775105 containerd[1574]: time="2025-08-13T07:18:00.775071072Z" level=info msg="Ensure that sandbox 50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047 in task-service has been cleanup successfully" Aug 13 07:18:00.778998 kubelet[2650]: I0813 07:18:00.778971 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:00.780853 containerd[1574]: time="2025-08-13T07:18:00.780822381Z" level=info msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" Aug 13 07:18:00.781198 containerd[1574]: time="2025-08-13T07:18:00.781178184Z" level=info msg="Ensure that sandbox 803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441 in task-service has been cleanup successfully" Aug 13 07:18:00.782023 kubelet[2650]: I0813 07:18:00.781999 2650 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:00.782998 containerd[1574]: time="2025-08-13T07:18:00.782544380Z" level=info msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" Aug 13 07:18:00.782998 containerd[1574]: time="2025-08-13T07:18:00.782698323Z" level=info msg="Ensure that sandbox c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656 in task-service has been cleanup successfully" Aug 13 07:18:00.824688 containerd[1574]: time="2025-08-13T07:18:00.824618568Z" level=error msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" failed" error="failed to destroy network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.824968 kubelet[2650]: E0813 07:18:00.824919 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:00.825067 kubelet[2650]: E0813 07:18:00.824988 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2"} Aug 13 07:18:00.825067 kubelet[2650]: E0813 07:18:00.825057 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af09357e-c282-465e-8e3e-c2975907b447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.825155 kubelet[2650]: E0813 07:18:00.825080 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af09357e-c282-465e-8e3e-c2975907b447\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mlvdk" podUID="af09357e-c282-465e-8e3e-c2975907b447" Aug 13 07:18:00.841328 containerd[1574]: time="2025-08-13T07:18:00.841263195Z" level=error msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" failed" error="failed to destroy network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.841638 kubelet[2650]: E0813 07:18:00.841576 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:00.841695 kubelet[2650]: E0813 07:18:00.841633 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441"} Aug 13 07:18:00.841695 kubelet[2650]: E0813 07:18:00.841669 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c443559-5d69-4d9a-86e1-d2701af11811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Aug 13 07:18:00.841788 kubelet[2650]: E0813 07:18:00.841714 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c443559-5d69-4d9a-86e1-d2701af11811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" podUID="5c443559-5d69-4d9a-86e1-d2701af11811" Aug 13 07:18:00.842017 containerd[1574]: time="2025-08-13T07:18:00.841974270Z" level=error msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" failed" error="failed to destroy network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.843125 kubelet[2650]: E0813 07:18:00.843098 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:00.843182 kubelet[2650]: E0813 07:18:00.843125 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789"} Aug 13 07:18:00.843182 kubelet[2650]: E0813 07:18:00.843149 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9ce3415-4567-4b4f-85b1-ab7682c65560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.843182 kubelet[2650]: E0813 07:18:00.843165 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9ce3415-4567-4b4f-85b1-ab7682c65560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rttzq" podUID="f9ce3415-4567-4b4f-85b1-ab7682c65560" Aug 13 07:18:00.845815 containerd[1574]: time="2025-08-13T07:18:00.845765415Z" level=error msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" failed" error="failed to destroy network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.845936 containerd[1574]: time="2025-08-13T07:18:00.845777239Z" level=error msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" failed" error="failed to destroy network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.846275 kubelet[2650]: E0813 07:18:00.846231 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:00.846275 kubelet[2650]: E0813 07:18:00.846260 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649"} Aug 13 07:18:00.846275 kubelet[2650]: E0813 07:18:00.846282 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99eb19aa-3962-4a4e-90dc-6113f9d3975a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.846516 kubelet[2650]: E0813 07:18:00.846311 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99eb19aa-3962-4a4e-90dc-6113f9d3975a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" podUID="99eb19aa-3962-4a4e-90dc-6113f9d3975a" Aug 13 07:18:00.846516 kubelet[2650]: E0813 07:18:00.846331 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:00.846516 kubelet[2650]: E0813 07:18:00.846345 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f"} Aug 13 07:18:00.846516 kubelet[2650]: E0813 07:18:00.846360 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"230a398e-7dc1-4ab4-8443-e6fef0e021f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.846639 kubelet[2650]: E0813 07:18:00.846374 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"230a398e-7dc1-4ab4-8443-e6fef0e021f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" podUID="230a398e-7dc1-4ab4-8443-e6fef0e021f2" Aug 13 07:18:00.848257 containerd[1574]: time="2025-08-13T07:18:00.848224923Z" level=error msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" failed" error="failed to destroy network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.848510 kubelet[2650]: E0813 07:18:00.848469 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:00.848510 kubelet[2650]: E0813 07:18:00.848497 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba"} Aug 13 07:18:00.848694 kubelet[2650]: E0813 07:18:00.848530 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94d947c1-874c-414a-a146-eedc813ee768\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.848694 kubelet[2650]: E0813 07:18:00.848545 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94d947c1-874c-414a-a146-eedc813ee768\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c8c97fd-vpb85" podUID="94d947c1-874c-414a-a146-eedc813ee768" Aug 13 07:18:00.849064 containerd[1574]: time="2025-08-13T07:18:00.848818149Z" level=error msg="StopPodSandbox for 
\"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" failed" error="failed to destroy network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.849114 kubelet[2650]: E0813 07:18:00.849059 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:00.849152 kubelet[2650]: E0813 07:18:00.849119 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047"} Aug 13 07:18:00.849183 kubelet[2650]: E0813 07:18:00.849153 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a60817e0-e119-4674-adab-2cc042d34e82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.849230 kubelet[2650]: E0813 07:18:00.849177 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a60817e0-e119-4674-adab-2cc042d34e82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-l9xx9" podUID="a60817e0-e119-4674-adab-2cc042d34e82" Aug 13 07:18:00.854707 containerd[1574]: time="2025-08-13T07:18:00.854656995Z" level=error msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" failed" error="failed to destroy network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:18:00.854929 kubelet[2650]: E0813 07:18:00.854894 2650 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:00.854985 kubelet[2650]: E0813 07:18:00.854944 2650 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656"} Aug 13 07:18:00.855019 kubelet[2650]: E0813 07:18:00.854979 2650 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37049b96-5b1d-4b14-aa39-fa916253ae4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:18:00.855078 kubelet[2650]: E0813 07:18:00.855013 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37049b96-5b1d-4b14-aa39-fa916253ae4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b99jc" podUID="37049b96-5b1d-4b14-aa39-fa916253ae4c" Aug 13 07:18:03.800067 kubelet[2650]: I0813 07:18:03.800027 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:03.800660 kubelet[2650]: E0813 07:18:03.800416 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:04.668142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755868710.mount: Deactivated successfully. 
Aug 13 07:18:04.790689 kubelet[2650]: E0813 07:18:04.790642 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:07.226003 containerd[1574]: time="2025-08-13T07:18:07.225915110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:07.226842 containerd[1574]: time="2025-08-13T07:18:07.226803186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:18:07.228070 containerd[1574]: time="2025-08-13T07:18:07.228041595Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:07.230272 containerd[1574]: time="2025-08-13T07:18:07.230206729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:07.230828 containerd[1574]: time="2025-08-13T07:18:07.230795553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.503004292s" Aug 13 07:18:07.230828 containerd[1574]: time="2025-08-13T07:18:07.230824902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:18:07.240914 containerd[1574]: time="2025-08-13T07:18:07.240821753Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:18:07.260539 containerd[1574]: time="2025-08-13T07:18:07.260473989Z" level=info msg="CreateContainer within sandbox \"4352f40f5e7164f3fa668b71081b862fc056b787d573e52c0f43a89d3eaf0edb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7b61f1a2c398b3b67bf30b13899c8d93f9d58dbb306deeea19848cc7cb7c8a9\"" Aug 13 07:18:07.260950 containerd[1574]: time="2025-08-13T07:18:07.260923362Z" level=info msg="StartContainer for \"b7b61f1a2c398b3b67bf30b13899c8d93f9d58dbb306deeea19848cc7cb7c8a9\"" Aug 13 07:18:07.465620 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:18:07.465769 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
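Editor's note: the pull above moved 158500163 bytes in 7.503004292s through containerd's image service. For reference, a rough equivalent with containerd's Go client; the socket path and the "k8s.io" namespace are the usual kubelet defaults, assumed here rather than taken from this log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx) // content size, comparable to the "bytes read" figure above
        fmt.Printf("pulled %s (%d bytes) in %s\n", img.Name(), size, time.Since(start))
    }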
Aug 13 07:18:07.473184 containerd[1574]: time="2025-08-13T07:18:07.473138514Z" level=info msg="StartContainer for \"b7b61f1a2c398b3b67bf30b13899c8d93f9d58dbb306deeea19848cc7cb7c8a9\" returns successfully" Aug 13 07:18:07.903478 containerd[1574]: time="2025-08-13T07:18:07.902012753Z" level=info msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" Aug 13 07:18:08.280803 kubelet[2650]: I0813 07:18:08.280210 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tjp8s" podStartSLOduration=2.380536533 podStartE2EDuration="25.280189373s" podCreationTimestamp="2025-08-13 07:17:43 +0000 UTC" firstStartedPulling="2025-08-13 07:17:44.332087352 +0000 UTC m=+19.857156471" lastFinishedPulling="2025-08-13 07:18:07.231740192 +0000 UTC m=+42.756809311" observedRunningTime="2025-08-13 07:18:07.917575256 +0000 UTC m=+43.442644375" watchObservedRunningTime="2025-08-13 07:18:08.280189373 +0000 UTC m=+43.805258512" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.279 [INFO][4002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.279 [INFO][4002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" iface="eth0" netns="/var/run/netns/cni-767f908d-93aa-a9e6-21ee-a2a3f402c4b1" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.280 [INFO][4002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" iface="eth0" netns="/var/run/netns/cni-767f908d-93aa-a9e6-21ee-a2a3f402c4b1" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.281 [INFO][4002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" iface="eth0" netns="/var/run/netns/cni-767f908d-93aa-a9e6-21ee-a2a3f402c4b1" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.281 [INFO][4002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.281 [INFO][4002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.350 [INFO][4013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.351 [INFO][4013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.351 [INFO][4013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.479 [WARNING][4013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.480 [INFO][4013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.481 [INFO][4013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:08.487621 containerd[1574]: 2025-08-13 07:18:08.484 [INFO][4002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:08.488417 containerd[1574]: time="2025-08-13T07:18:08.487858534Z" level=info msg="TearDown network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" successfully" Aug 13 07:18:08.488417 containerd[1574]: time="2025-08-13T07:18:08.487925729Z" level=info msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" returns successfully" Aug 13 07:18:08.492139 systemd[1]: run-netns-cni\x2d767f908d\x2d93aa\x2da9e6\x2d21ee\x2da2a3f402c4b1.mount: Deactivated successfully. Aug 13 07:18:08.511518 kubelet[2650]: I0813 07:18:08.511463 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d947c1-874c-414a-a146-eedc813ee768-whisker-ca-bundle\") pod \"94d947c1-874c-414a-a146-eedc813ee768\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " Aug 13 07:18:08.511518 kubelet[2650]: I0813 07:18:08.511519 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94d947c1-874c-414a-a146-eedc813ee768-whisker-backend-key-pair\") pod \"94d947c1-874c-414a-a146-eedc813ee768\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " Aug 13 07:18:08.511518 kubelet[2650]: I0813 07:18:08.511543 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmnjd\" (UniqueName: \"kubernetes.io/projected/94d947c1-874c-414a-a146-eedc813ee768-kube-api-access-rmnjd\") pod \"94d947c1-874c-414a-a146-eedc813ee768\" (UID: \"94d947c1-874c-414a-a146-eedc813ee768\") " Aug 13 07:18:08.512355 kubelet[2650]: I0813 07:18:08.512160 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d947c1-874c-414a-a146-eedc813ee768-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "94d947c1-874c-414a-a146-eedc813ee768" (UID: "94d947c1-874c-414a-a146-eedc813ee768"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:18:08.515520 kubelet[2650]: I0813 07:18:08.515460 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d947c1-874c-414a-a146-eedc813ee768-kube-api-access-rmnjd" (OuterVolumeSpecName: "kube-api-access-rmnjd") pod "94d947c1-874c-414a-a146-eedc813ee768" (UID: "94d947c1-874c-414a-a146-eedc813ee768"). InnerVolumeSpecName "kube-api-access-rmnjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:18:08.517409 kubelet[2650]: I0813 07:18:08.517359 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d947c1-874c-414a-a146-eedc813ee768-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "94d947c1-874c-414a-a146-eedc813ee768" (UID: "94d947c1-874c-414a-a146-eedc813ee768"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:18:08.519045 systemd[1]: var-lib-kubelet-pods-94d947c1\x2d874c\x2d414a\x2da146\x2deedc813ee768-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmnjd.mount: Deactivated successfully. Aug 13 07:18:08.522347 systemd[1]: var-lib-kubelet-pods-94d947c1\x2d874c\x2d414a\x2da146\x2deedc813ee768-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:18:08.612076 kubelet[2650]: I0813 07:18:08.612034 2650 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d947c1-874c-414a-a146-eedc813ee768-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 07:18:08.612076 kubelet[2650]: I0813 07:18:08.612070 2650 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94d947c1-874c-414a-a146-eedc813ee768-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 07:18:08.612076 kubelet[2650]: I0813 07:18:08.612080 2650 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmnjd\" (UniqueName: \"kubernetes.io/projected/94d947c1-874c-414a-a146-eedc813ee768-kube-api-access-rmnjd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:18:09.217247 kubelet[2650]: I0813 07:18:09.217209 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d85811d-ab52-44d5-bf4e-1668ec1e85b0-whisker-ca-bundle\") pod \"whisker-75444cb96b-vgppg\" (UID: \"9d85811d-ab52-44d5-bf4e-1668ec1e85b0\") " pod="calico-system/whisker-75444cb96b-vgppg" Aug 13 07:18:09.217247 kubelet[2650]: I0813 07:18:09.217253 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d85811d-ab52-44d5-bf4e-1668ec1e85b0-whisker-backend-key-pair\") pod \"whisker-75444cb96b-vgppg\" (UID: \"9d85811d-ab52-44d5-bf4e-1668ec1e85b0\") " pod="calico-system/whisker-75444cb96b-vgppg" Aug 13 07:18:09.217466 kubelet[2650]: I0813 07:18:09.217272 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/9d85811d-ab52-44d5-bf4e-1668ec1e85b0-kube-api-access-d2vvm\") pod \"whisker-75444cb96b-vgppg\" (UID: \"9d85811d-ab52-44d5-bf4e-1668ec1e85b0\") " pod="calico-system/whisker-75444cb96b-vgppg" Aug 13 07:18:09.488268 containerd[1574]: time="2025-08-13T07:18:09.488129884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75444cb96b-vgppg,Uid:9d85811d-ab52-44d5-bf4e-1668ec1e85b0,Namespace:calico-system,Attempt:0,}" Aug 13 07:18:09.612454 systemd-networkd[1237]: cali09aa45fce70: Link UP Aug 13 07:18:09.612714 systemd-networkd[1237]: cali09aa45fce70: Gained carrier Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.532 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist 
Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.542 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--75444cb96b--vgppg-eth0 whisker-75444cb96b- calico-system 9d85811d-ab52-44d5-bf4e-1668ec1e85b0 922 0 2025-08-13 07:18:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75444cb96b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-75444cb96b-vgppg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09aa45fce70 [] [] }} ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.542 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.572 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" HandleID="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Workload="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.572 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" HandleID="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Workload="localhost-k8s-whisker--75444cb96b--vgppg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000427dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-75444cb96b-vgppg", "timestamp":"2025-08-13 07:18:09.572108601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.572 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.572 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.572 [INFO][4070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.578 [INFO][4070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.584 [INFO][4070] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.588 [INFO][4070] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.589 [INFO][4070] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.591 [INFO][4070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.591 [INFO][4070] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.592 [INFO][4070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.598 [INFO][4070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.602 [INFO][4070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.602 [INFO][4070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" host="localhost" Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.602 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
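Editor's note: the IPAM exchange above is bookkeeping over a /26 affinity block: this host holds 192.168.88.128/26 (64 addresses), the block loads cleanly, and the handle write claims 192.168.88.129 for the whisker pod. The containment check is standard-library material:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // the host's affinity block
        ip := netip.MustParseAddr("192.168.88.129")         // claimed for whisker-75444cb96b-vgppg

        fmt.Println(block.Contains(ip))       // true
        fmt.Println(1 << (32 - block.Bits())) // 64 addresses in a /26
    }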
Aug 13 07:18:09.628687 containerd[1574]: 2025-08-13 07:18:09.602 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" HandleID="k8s-pod-network.e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Workload="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.605 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75444cb96b--vgppg-eth0", GenerateName:"whisker-75444cb96b-", Namespace:"calico-system", SelfLink:"", UID:"9d85811d-ab52-44d5-bf4e-1668ec1e85b0", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75444cb96b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-75444cb96b-vgppg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09aa45fce70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.605 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.605 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09aa45fce70 ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.612 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.613 [INFO][4055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--75444cb96b--vgppg-eth0", GenerateName:"whisker-75444cb96b-", Namespace:"calico-system", SelfLink:"", UID:"9d85811d-ab52-44d5-bf4e-1668ec1e85b0", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75444cb96b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd", Pod:"whisker-75444cb96b-vgppg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09aa45fce70", MAC:"66:9b:37:d5:fe:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:09.629490 containerd[1574]: 2025-08-13 07:18:09.624 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd" Namespace="calico-system" Pod="whisker-75444cb96b-vgppg" WorkloadEndpoint="localhost-k8s-whisker--75444cb96b--vgppg-eth0" Aug 13 07:18:09.656287 containerd[1574]: time="2025-08-13T07:18:09.656177388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:09.656287 containerd[1574]: time="2025-08-13T07:18:09.656243009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:09.656287 containerd[1574]: time="2025-08-13T07:18:09.656258590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:09.657156 containerd[1574]: time="2025-08-13T07:18:09.657109097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:09.680620 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:18:09.735860 containerd[1574]: time="2025-08-13T07:18:09.735716587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75444cb96b-vgppg,Uid:9d85811d-ab52-44d5-bf4e-1668ec1e85b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd\"" Aug 13 07:18:09.742350 containerd[1574]: time="2025-08-13T07:18:09.740728302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:18:09.931914 kernel: bpftool[4252]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:18:10.180357 systemd-networkd[1237]: vxlan.calico: Link UP Aug 13 07:18:10.180374 systemd-networkd[1237]: vxlan.calico: Gained carrier Aug 13 07:18:10.626612 kubelet[2650]: I0813 07:18:10.626567 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d947c1-874c-414a-a146-eedc813ee768" path="/var/lib/kubelet/pods/94d947c1-874c-414a-a146-eedc813ee768/volumes" Aug 13 07:18:11.036694 containerd[1574]: time="2025-08-13T07:18:11.036564145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.037624 containerd[1574]: time="2025-08-13T07:18:11.037580088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:18:11.038785 containerd[1574]: time="2025-08-13T07:18:11.038738709Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.041139 containerd[1574]: time="2025-08-13T07:18:11.041106108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.041948 containerd[1574]: time="2025-08-13T07:18:11.041908214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.301132086s" Aug 13 07:18:11.041988 containerd[1574]: time="2025-08-13T07:18:11.041959216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:18:11.043859 containerd[1574]: time="2025-08-13T07:18:11.043831174Z" level=info msg="CreateContainer within sandbox \"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:18:11.044862 systemd-networkd[1237]: cali09aa45fce70: Gained IPv6LL Aug 13 07:18:11.056891 containerd[1574]: time="2025-08-13T07:18:11.056823628Z" level=info msg="CreateContainer within sandbox \"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"215f0ad4282d684f3e5e32efd7f456157ff28f8062e94ade795d7dd32d2f7b59\"" Aug 13 07:18:11.057541 containerd[1574]: 
time="2025-08-13T07:18:11.057441845Z" level=info msg="StartContainer for \"215f0ad4282d684f3e5e32efd7f456157ff28f8062e94ade795d7dd32d2f7b59\"" Aug 13 07:18:11.125917 containerd[1574]: time="2025-08-13T07:18:11.125795530Z" level=info msg="StartContainer for \"215f0ad4282d684f3e5e32efd7f456157ff28f8062e94ade795d7dd32d2f7b59\" returns successfully" Aug 13 07:18:11.127772 containerd[1574]: time="2025-08-13T07:18:11.127426335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:18:11.623777 containerd[1574]: time="2025-08-13T07:18:11.623723322Z" level=info msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.719 [INFO][4381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.719 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" iface="eth0" netns="/var/run/netns/cni-2ea12424-3572-c4af-4cdf-01115fa1def9" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.720 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" iface="eth0" netns="/var/run/netns/cni-2ea12424-3572-c4af-4cdf-01115fa1def9" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.720 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" iface="eth0" netns="/var/run/netns/cni-2ea12424-3572-c4af-4cdf-01115fa1def9" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.720 [INFO][4381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.720 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.748 [INFO][4389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.748 [INFO][4389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.748 [INFO][4389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.754 [WARNING][4389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.754 [INFO][4389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.756 [INFO][4389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:11.762728 containerd[1574]: 2025-08-13 07:18:11.759 [INFO][4381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:11.764062 containerd[1574]: time="2025-08-13T07:18:11.764017033Z" level=info msg="TearDown network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" successfully" Aug 13 07:18:11.764125 containerd[1574]: time="2025-08-13T07:18:11.764056141Z" level=info msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" returns successfully" Aug 13 07:18:11.764718 kubelet[2650]: E0813 07:18:11.764460 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:11.765167 containerd[1574]: time="2025-08-13T07:18:11.764973488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rttzq,Uid:f9ce3415-4567-4b4f-85b1-ab7682c65560,Namespace:kube-system,Attempt:1,}" Aug 13 07:18:11.766397 systemd[1]: run-netns-cni\x2d2ea12424\x2d3572\x2dc4af\x2d4cdf\x2d01115fa1def9.mount: Deactivated successfully. 
Aug 13 07:18:11.872659 systemd-networkd[1237]: cali1405a2f1a6f: Link UP Aug 13 07:18:11.873722 systemd-networkd[1237]: cali1405a2f1a6f: Gained carrier Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.810 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0 coredns-7c65d6cfc9- kube-system f9ce3415-4567-4b4f-85b1-ab7682c65560 937 0 2025-08-13 07:17:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-rttzq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1405a2f1a6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.810 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.833 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" HandleID="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.833 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" HandleID="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-rttzq", "timestamp":"2025-08-13 07:18:11.833383906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.833 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.833 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.833 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.839 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.845 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.849 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.850 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.854 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.854 [INFO][4414] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.856 [INFO][4414] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648 Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.860 [INFO][4414] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.866 [INFO][4414] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.866 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" host="localhost" Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.866 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
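Editor's note: same block, second claim: coredns-7c65d6cfc9-rttzq receives 192.168.88.130 because .129 went to the whisker pod two seconds earlier. A toy version of "assign the next free address in the block"; Calico's real allocator also tracks handles, reservations, and tunnel addresses, so treating .128 as taken by the vxlan.calico device is an assumption, not something this log states:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first address in block not present in used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true, // assumed held by the vxlan.calico tunnel
            netip.MustParseAddr("192.168.88.129"): true, // whisker-75444cb96b-vgppg
        }
        if a, ok := nextFree(block, used); ok {
            fmt.Println("assigning", a) // 192.168.88.130, matching the log
        }
    }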
Aug 13 07:18:11.889566 containerd[1574]: 2025-08-13 07:18:11.866 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" HandleID="k8s-pod-network.05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.890391 containerd[1574]: 2025-08-13 07:18:11.870 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f9ce3415-4567-4b4f-85b1-ab7682c65560", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-rttzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1405a2f1a6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:11.890391 containerd[1574]: 2025-08-13 07:18:11.870 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.890391 containerd[1574]: 2025-08-13 07:18:11.870 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1405a2f1a6f ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.890391 containerd[1574]: 2025-08-13 07:18:11.875 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.890391 
containerd[1574]: 2025-08-13 07:18:11.875 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f9ce3415-4567-4b4f-85b1-ab7682c65560", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648", Pod:"coredns-7c65d6cfc9-rttzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1405a2f1a6f", MAC:"fe:07:71:41:77:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:11.890391 containerd[1574]: 2025-08-13 07:18:11.884 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rttzq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:11.911156 containerd[1574]: time="2025-08-13T07:18:11.910999476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:11.911156 containerd[1574]: time="2025-08-13T07:18:11.911081680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:11.911156 containerd[1574]: time="2025-08-13T07:18:11.911097872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:11.911742 containerd[1574]: time="2025-08-13T07:18:11.911206410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:11.957381 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:18:11.986604 containerd[1574]: time="2025-08-13T07:18:11.986557415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rttzq,Uid:f9ce3415-4567-4b4f-85b1-ab7682c65560,Namespace:kube-system,Attempt:1,} returns sandbox id \"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648\"" Aug 13 07:18:11.987420 kubelet[2650]: E0813 07:18:11.987393 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:11.990105 containerd[1574]: time="2025-08-13T07:18:11.990076150Z" level=info msg="CreateContainer within sandbox \"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:18:12.007587 containerd[1574]: time="2025-08-13T07:18:12.007531817Z" level=info msg="CreateContainer within sandbox \"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"962588fab0e852ada7e51f6198862eb013d51503a2405adf377ec682cdedd566\"" Aug 13 07:18:12.008065 containerd[1574]: time="2025-08-13T07:18:12.008034022Z" level=info msg="StartContainer for \"962588fab0e852ada7e51f6198862eb013d51503a2405adf377ec682cdedd566\"" Aug 13 07:18:12.066754 containerd[1574]: time="2025-08-13T07:18:12.066707258Z" level=info msg="StartContainer for \"962588fab0e852ada7e51f6198862eb013d51503a2405adf377ec682cdedd566\" returns successfully" Aug 13 07:18:12.067027 systemd-networkd[1237]: vxlan.calico: Gained IPv6LL Aug 13 07:18:12.822304 kubelet[2650]: E0813 07:18:12.822037 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:12.842960 kubelet[2650]: I0813 07:18:12.841778 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rttzq" podStartSLOduration=43.84176176 podStartE2EDuration="43.84176176s" podCreationTimestamp="2025-08-13 07:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:12.841165647 +0000 UTC m=+48.366234766" watchObservedRunningTime="2025-08-13 07:18:12.84176176 +0000 UTC m=+48.366830879" Aug 13 07:18:13.106213 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:49358.service - OpenSSH per-connection server daemon (10.0.0.1:49358). Aug 13 07:18:13.624552 containerd[1574]: time="2025-08-13T07:18:13.624424686Z" level=info msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" Aug 13 07:18:13.625228 containerd[1574]: time="2025-08-13T07:18:13.624565839Z" level=info msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" Aug 13 07:18:13.789654 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 49358 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:13.793824 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:13.815670 systemd-logind[1556]: New session 8 of user core. Aug 13 07:18:13.824238 systemd[1]: Started session-8.scope - Session 8 of User core. 
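Editor's note: the StopPodSandbox attempts at 07:18:13 are ordinary CRI gRPC calls from kubelet to containerd, the same RPC whose "failed to destroy network" errors filled the 07:18:00 burst. A hedged sketch of issuing one by hand with the published CRI API; the endpoint is the assumed containerd default, and the sandbox ID is copied from the log:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
            PodSandboxId: "8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f",
        })
        if err != nil {
            log.Fatal(err) // surfaces the same rpc error kubelet logs until the CNI delete succeeds
        }
        log.Println("sandbox stopped")
    }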
Aug 13 07:18:13.825327 kubelet[2650]: E0813 07:18:13.825220 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.789 [INFO][4542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.793 [INFO][4542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" iface="eth0" netns="/var/run/netns/cni-dd46b914-4a0c-6bc2-745a-ba22d77cfeeb" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.794 [INFO][4542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" iface="eth0" netns="/var/run/netns/cni-dd46b914-4a0c-6bc2-745a-ba22d77cfeeb" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.794 [INFO][4542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" iface="eth0" netns="/var/run/netns/cni-dd46b914-4a0c-6bc2-745a-ba22d77cfeeb" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.794 [INFO][4542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.794 [INFO][4542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.837 [INFO][4564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.837 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.837 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.843 [WARNING][4564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.843 [INFO][4564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.845 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:13.854240 containerd[1574]: 2025-08-13 07:18:13.847 [INFO][4542] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:13.857303 containerd[1574]: time="2025-08-13T07:18:13.854660643Z" level=info msg="TearDown network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" successfully" Aug 13 07:18:13.857303 containerd[1574]: time="2025-08-13T07:18:13.854689671Z" level=info msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" returns successfully" Aug 13 07:18:13.857303 containerd[1574]: time="2025-08-13T07:18:13.856205731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlvdk,Uid:af09357e-c282-465e-8e3e-c2975907b447,Namespace:kube-system,Attempt:1,}" Aug 13 07:18:13.857392 kubelet[2650]: E0813 07:18:13.854960 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:13.859445 systemd[1]: run-netns-cni\x2ddd46b914\x2d4a0c\x2d6bc2\x2d745a\x2dba22d77cfeeb.mount: Deactivated successfully. Aug 13 07:18:13.860195 systemd-networkd[1237]: cali1405a2f1a6f: Gained IPv6LL Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.792 [INFO][4543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.792 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" iface="eth0" netns="/var/run/netns/cni-b505b5ea-afdb-8269-bb08-4890b8a7b86d" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.793 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" iface="eth0" netns="/var/run/netns/cni-b505b5ea-afdb-8269-bb08-4890b8a7b86d" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.793 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" iface="eth0" netns="/var/run/netns/cni-b505b5ea-afdb-8269-bb08-4890b8a7b86d" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.793 [INFO][4543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.793 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.838 [INFO][4562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.838 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.845 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.849 [WARNING][4562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.850 [INFO][4562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.853 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:13.862610 containerd[1574]: 2025-08-13 07:18:13.857 [INFO][4543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:13.864655 containerd[1574]: time="2025-08-13T07:18:13.864608655Z" level=info msg="TearDown network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" successfully" Aug 13 07:18:13.864655 containerd[1574]: time="2025-08-13T07:18:13.864650889Z" level=info msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" returns successfully" Aug 13 07:18:13.866068 containerd[1574]: time="2025-08-13T07:18:13.866036669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-69nqm,Uid:230a398e-7dc1-4ab4-8443-e6fef0e021f2,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:13.866843 systemd[1]: run-netns-cni\x2db505b5ea\x2dafdb\x2d8269\x2dbb08\x2d4890b8a7b86d.mount: Deactivated successfully. Aug 13 07:18:14.015099 sshd[4519]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:14.021991 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:18:14.022665 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:49358.service: Deactivated successfully. Aug 13 07:18:14.026940 systemd-networkd[1237]: calie73c7d9beac: Link UP Aug 13 07:18:14.027945 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:18:14.029166 systemd-networkd[1237]: calie73c7d9beac: Gained carrier Aug 13 07:18:14.029585 systemd-logind[1556]: Removed session 8. 
Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.920 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0 coredns-7c65d6cfc9- kube-system af09357e-c282-465e-8e3e-c2975907b447 991 0 2025-08-13 07:17:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mlvdk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie73c7d9beac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.920 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.984 [INFO][4615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" HandleID="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.984 [INFO][4615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" HandleID="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mlvdk", "timestamp":"2025-08-13 07:18:13.984159223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.984 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.985 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.985 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.991 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:13.997 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.002 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.004 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.006 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.006 [INFO][4615] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.007 [INFO][4615] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7 Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.011 [INFO][4615] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.017 [INFO][4615] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.017 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" host="localhost" Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.017 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:18:14.046208 containerd[1574]: 2025-08-13 07:18:14.017 [INFO][4615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" HandleID="k8s-pod-network.24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.047186 containerd[1574]: 2025-08-13 07:18:14.021 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af09357e-c282-465e-8e3e-c2975907b447", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mlvdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie73c7d9beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.047186 containerd[1574]: 2025-08-13 07:18:14.021 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.047186 containerd[1574]: 2025-08-13 07:18:14.021 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie73c7d9beac ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.047186 containerd[1574]: 2025-08-13 07:18:14.029 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.047186 
containerd[1574]: 2025-08-13 07:18:14.032 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af09357e-c282-465e-8e3e-c2975907b447", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7", Pod:"coredns-7c65d6cfc9-mlvdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie73c7d9beac", MAC:"16:e5:1c:4d:17:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.047186 containerd[1574]: 2025-08-13 07:18:14.043 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mlvdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:14.068240 containerd[1574]: time="2025-08-13T07:18:14.068150393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:14.068240 containerd[1574]: time="2025-08-13T07:18:14.068221664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:14.068240 containerd[1574]: time="2025-08-13T07:18:14.068236113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:14.068826 containerd[1574]: time="2025-08-13T07:18:14.068343287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:14.098275 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:18:14.133411 containerd[1574]: time="2025-08-13T07:18:14.133334557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlvdk,Uid:af09357e-c282-465e-8e3e-c2975907b447,Namespace:kube-system,Attempt:1,} returns sandbox id \"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7\"" Aug 13 07:18:14.134992 kubelet[2650]: E0813 07:18:14.134844 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:14.138814 systemd-networkd[1237]: cali7c18407cbdc: Link UP Aug 13 07:18:14.139068 systemd-networkd[1237]: cali7c18407cbdc: Gained carrier Aug 13 07:18:14.140519 containerd[1574]: time="2025-08-13T07:18:14.140292361Z" level=info msg="CreateContainer within sandbox \"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:13.949 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0 calico-apiserver-6dc6dbff5- calico-apiserver 230a398e-7dc1-4ab4-8443-e6fef0e021f2 992 0 2025-08-13 07:17:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc6dbff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc6dbff5-69nqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c18407cbdc [] [] }} ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:13.951 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.008 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" HandleID="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.009 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" HandleID="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a7f00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc6dbff5-69nqm", "timestamp":"2025-08-13 07:18:14.00653295 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.009 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.017 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.018 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.093 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.101 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.107 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.109 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.111 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.111 [INFO][4626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.113 [INFO][4626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4 Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.118 [INFO][4626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.127 [INFO][4626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.127 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" host="localhost" Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.127 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:18:14.157424 containerd[1574]: 2025-08-13 07:18:14.127 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" HandleID="k8s-pod-network.afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.133 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"230a398e-7dc1-4ab4-8443-e6fef0e021f2", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc6dbff5-69nqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18407cbdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.133 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.134 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c18407cbdc ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.138 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.139 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"230a398e-7dc1-4ab4-8443-e6fef0e021f2", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4", Pod:"calico-apiserver-6dc6dbff5-69nqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18407cbdc", MAC:"4a:00:9e:a1:30:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.158506 containerd[1574]: 2025-08-13 07:18:14.153 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-69nqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:14.162674 containerd[1574]: time="2025-08-13T07:18:14.162636093Z" level=info msg="CreateContainer within sandbox \"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eede9caf40e2d0746a014c89878605c440938735939a6ffa70875a07b492712b\"" Aug 13 07:18:14.163301 containerd[1574]: time="2025-08-13T07:18:14.163217093Z" level=info msg="StartContainer for \"eede9caf40e2d0746a014c89878605c440938735939a6ffa70875a07b492712b\"" Aug 13 07:18:14.188273 containerd[1574]: time="2025-08-13T07:18:14.188157670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:14.188418 containerd[1574]: time="2025-08-13T07:18:14.188244504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:14.188661 containerd[1574]: time="2025-08-13T07:18:14.188599472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:14.189187 containerd[1574]: time="2025-08-13T07:18:14.188808841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:14.225577 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:18:14.396688 containerd[1574]: time="2025-08-13T07:18:14.396629364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-69nqm,Uid:230a398e-7dc1-4ab4-8443-e6fef0e021f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4\"" Aug 13 07:18:14.397490 containerd[1574]: time="2025-08-13T07:18:14.396630116Z" level=info msg="StartContainer for \"eede9caf40e2d0746a014c89878605c440938735939a6ffa70875a07b492712b\" returns successfully" Aug 13 07:18:14.624998 containerd[1574]: time="2025-08-13T07:18:14.624887222Z" level=info msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" Aug 13 07:18:14.625470 containerd[1574]: time="2025-08-13T07:18:14.625073875Z" level=info msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" Aug 13 07:18:14.626073 containerd[1574]: time="2025-08-13T07:18:14.626018051Z" level=info msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" Aug 13 07:18:14.790205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575653592.mount: Deactivated successfully. Aug 13 07:18:14.830057 kubelet[2650]: E0813 07:18:14.830022 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:14.830561 kubelet[2650]: E0813 07:18:14.830090 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:14.928479 containerd[1574]: time="2025-08-13T07:18:14.927666099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:14.928650 containerd[1574]: time="2025-08-13T07:18:14.928581357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:18:14.931544 containerd[1574]: time="2025-08-13T07:18:14.931506349Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.909 [INFO][4805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.909 [INFO][4805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" iface="eth0" netns="/var/run/netns/cni-3edc2c59-f220-a6fa-7e44-cd55941f38a2" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.909 [INFO][4805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" iface="eth0" netns="/var/run/netns/cni-3edc2c59-f220-a6fa-7e44-cd55941f38a2" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.910 [INFO][4805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" iface="eth0" netns="/var/run/netns/cni-3edc2c59-f220-a6fa-7e44-cd55941f38a2" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.912 [INFO][4805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.913 [INFO][4805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.953 [INFO][4843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.953 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.953 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.958 [WARNING][4843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.958 [INFO][4843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.959 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:14.967828 containerd[1574]: 2025-08-13 07:18:14.962 [INFO][4805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:14.967828 containerd[1574]: time="2025-08-13T07:18:14.966812125Z" level=info msg="TearDown network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" successfully" Aug 13 07:18:14.967828 containerd[1574]: time="2025-08-13T07:18:14.966841404Z" level=info msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" returns successfully" Aug 13 07:18:14.967828 containerd[1574]: time="2025-08-13T07:18:14.967799658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb7cfb49-tpxsq,Uid:5c443559-5d69-4d9a-86e1-d2701af11811,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:14.971363 systemd[1]: run-netns-cni\x2d3edc2c59\x2df220\x2da6fa\x2d7e44\x2dcd55941f38a2.mount: Deactivated successfully. Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.904 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.904 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" iface="eth0" netns="/var/run/netns/cni-c0baf646-5e6d-951e-3bbb-746cf92c3668" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.904 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" iface="eth0" netns="/var/run/netns/cni-c0baf646-5e6d-951e-3bbb-746cf92c3668" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.905 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" iface="eth0" netns="/var/run/netns/cni-c0baf646-5e6d-951e-3bbb-746cf92c3668" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.905 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.905 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.963 [INFO][4835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.963 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.963 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.972 [WARNING][4835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.972 [INFO][4835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.973 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:14.981532 containerd[1574]: 2025-08-13 07:18:14.976 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:14.981532 containerd[1574]: time="2025-08-13T07:18:14.980478277Z" level=info msg="TearDown network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" successfully" Aug 13 07:18:14.981532 containerd[1574]: time="2025-08-13T07:18:14.980515402Z" level=info msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" returns successfully" Aug 13 07:18:14.981532 containerd[1574]: time="2025-08-13T07:18:14.981178035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-vgmmt,Uid:99eb19aa-3962-4a4e-90dc-6113f9d3975a,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:14.982374 systemd[1]: run-netns-cni\x2dc0baf646\x2d5e6d\x2d951e\x2d3bbb\x2d746cf92c3668.mount: Deactivated successfully. Aug 13 07:18:15.137838 containerd[1574]: time="2025-08-13T07:18:15.137770479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:15.138666 containerd[1574]: time="2025-08-13T07:18:15.138622990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.011152125s" Aug 13 07:18:15.138723 containerd[1574]: time="2025-08-13T07:18:15.138671617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:18:15.139771 containerd[1574]: time="2025-08-13T07:18:15.139738907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:15.141148 containerd[1574]: time="2025-08-13T07:18:15.141107307Z" level=info msg="CreateContainer within sandbox \"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:18:15.267091 systemd-networkd[1237]: calie73c7d9beac: Gained IPv6LL Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" iface="eth0" netns="/var/run/netns/cni-035c123f-7c6e-1322-f733-84f07313f64a" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" iface="eth0" netns="/var/run/netns/cni-035c123f-7c6e-1322-f733-84f07313f64a" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" iface="eth0" netns="/var/run/netns/cni-035c123f-7c6e-1322-f733-84f07313f64a" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:14.915 [INFO][4809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.022 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.022 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.022 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.308 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.308 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.405 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:15.413562 containerd[1574]: 2025-08-13 07:18:15.409 [INFO][4809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:15.414416 containerd[1574]: time="2025-08-13T07:18:15.414290722Z" level=info msg="TearDown network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" successfully" Aug 13 07:18:15.414416 containerd[1574]: time="2025-08-13T07:18:15.414321794Z" level=info msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" returns successfully" Aug 13 07:18:15.416751 systemd[1]: run-netns-cni\x2d035c123f\x2d7c6e\x2d1322\x2df733\x2d84f07313f64a.mount: Deactivated successfully. 
Aug 13 07:18:15.417211 containerd[1574]: time="2025-08-13T07:18:15.417163555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b99jc,Uid:37049b96-5b1d-4b14-aa39-fa916253ae4c,Namespace:calico-system,Attempt:1,}"
Aug 13 07:18:15.459274 systemd-networkd[1237]: cali7c18407cbdc: Gained IPv6LL
Aug 13 07:18:15.468565 containerd[1574]: time="2025-08-13T07:18:15.467667708Z" level=info msg="CreateContainer within sandbox \"e2ee631b88235bf2b836743f3986758d1147f6ad6e6ed13229912463737cddfd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"beda87c3ed8bd997eb9e538c9dd952cd0c316e76b7ed33ce7859be3fa1646a69\""
Aug 13 07:18:15.478128 containerd[1574]: time="2025-08-13T07:18:15.477532082Z" level=info msg="StartContainer for \"beda87c3ed8bd997eb9e538c9dd952cd0c316e76b7ed33ce7859be3fa1646a69\""
Aug 13 07:18:15.585118 containerd[1574]: time="2025-08-13T07:18:15.585058524Z" level=info msg="StartContainer for \"beda87c3ed8bd997eb9e538c9dd952cd0c316e76b7ed33ce7859be3fa1646a69\" returns successfully"
Aug 13 07:18:15.624580 systemd-networkd[1237]: calib3e4a21b81e: Link UP
Aug 13 07:18:15.625465 containerd[1574]: time="2025-08-13T07:18:15.625416309Z" level=info msg="StopPodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\""
Aug 13 07:18:15.627038 systemd-networkd[1237]: calib3e4a21b81e: Gained carrier
Aug 13 07:18:15.636488 kubelet[2650]: I0813 07:18:15.636423 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mlvdk" podStartSLOduration=46.636384146 podStartE2EDuration="46.636384146s" podCreationTimestamp="2025-08-13 07:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:14.914382221 +0000 UTC m=+50.439451340" watchObservedRunningTime="2025-08-13 07:18:15.636384146 +0000 UTC m=+51.161453265"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.547 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0 calico-kube-controllers-68bb7cfb49- calico-system 5c443559-5d69-4d9a-86e1-d2701af11811 1016 0 2025-08-13 07:17:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68bb7cfb49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68bb7cfb49-tpxsq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib3e4a21b81e [] [] }} ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.547 [INFO][4868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.582 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" HandleID="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.583 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" HandleID="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68bb7cfb49-tpxsq", "timestamp":"2025-08-13 07:18:15.582883864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.583 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.584 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.585 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.592 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.597 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.601 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.602 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.604 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.604 [INFO][4949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.607 [INFO][4949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.612 [INFO][4949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" host="localhost"
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:15.640036 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" HandleID="k8s-pod-network.3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.620 [INFO][4868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0", GenerateName:"calico-kube-controllers-68bb7cfb49-", Namespace:"calico-system", SelfLink:"", UID:"5c443559-5d69-4d9a-86e1-d2701af11811", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb7cfb49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68bb7cfb49-tpxsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e4a21b81e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.620 [INFO][4868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.620 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3e4a21b81e ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.627 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.628 [INFO][4868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0", GenerateName:"calico-kube-controllers-68bb7cfb49-", Namespace:"calico-system", SelfLink:"", UID:"5c443559-5d69-4d9a-86e1-d2701af11811", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb7cfb49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857", Pod:"calico-kube-controllers-68bb7cfb49-tpxsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e4a21b81e", MAC:"5e:ae:bf:33:2e:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.641233 containerd[1574]: 2025-08-13 07:18:15.637 [INFO][4868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857" Namespace="calico-system" Pod="calico-kube-controllers-68bb7cfb49-tpxsq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0"
Aug 13 07:18:15.663118 containerd[1574]: time="2025-08-13T07:18:15.662026966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:15.663118 containerd[1574]: time="2025-08-13T07:18:15.662087788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:15.663118 containerd[1574]: time="2025-08-13T07:18:15.662101475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.663118 containerd[1574]: time="2025-08-13T07:18:15.662337016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.697703 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 07:18:15.737950 systemd-networkd[1237]: cali8717189fbbc: Link UP
Aug 13 07:18:15.741811 systemd-networkd[1237]: cali8717189fbbc: Gained carrier
Aug 13 07:18:15.746854 containerd[1574]: time="2025-08-13T07:18:15.746805063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb7cfb49-tpxsq,Uid:5c443559-5d69-4d9a-86e1-d2701af11811,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857\""
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.530 [INFO][4877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b99jc-eth0 csi-node-driver- calico-system 37049b96-5b1d-4b14-aa39-fa916253ae4c 1015 0 2025-08-13 07:17:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b99jc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8717189fbbc [] [] }} ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.530 [INFO][4877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.585 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" HandleID="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Workload="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.585 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" HandleID="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001195d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b99jc", "timestamp":"2025-08-13 07:18:15.585268783 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.585 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.617 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.694 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.700 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.705 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.707 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.709 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.709 [INFO][4940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.711 [INFO][4940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.714 [INFO][4940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.720 [INFO][4940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.720 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" host="localhost"
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.720 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
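The ipam.go sequences above (and the two more that follow for the other pods) all narrate the same locked loop: acquire the host-wide IPAM lock, confirm this host's affinity to block 192.168.88.128/26, load the block, claim the next free address, write the block back, and release the lock, which is why the pods receive consecutive addresses .133 through .136. A minimal, self-contained Go sketch of that flow, purely illustrative and assuming nothing beyond what the log lines say (the types and the "k8s-pod-network.demo" handle below are ours, not Calico's real ipam package):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one /26 allocation block with a host affinity,
// mirroring what the ipam.go lines load and write back.
type block struct {
	cidr      netip.Prefix
	affinity  string                // host that owns the block, e.g. "localhost"
	allocated map[netip.Addr]string // addr -> handle
}

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock"

// autoAssign claims the next free address in the block for a handle,
// following the locked sequence the logs show (353/368 ... 374).
func autoAssign(b *block, host, handle string) (netip.Addr, error) {
	ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

	if b.affinity != host { // "Trying affinity for 192.168.88.128/26"
		return netip.Addr{}, fmt.Errorf("no affinity for %s on %s", b.cidr, host)
	}
	// "Attempting to assign 1 addresses from block": scan for a free IP.
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; taken {
			continue
		}
		b.allocated[a] = handle // "Writing block in order to claim IPs"
		return a, nil           // "Successfully claimed IPs"
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		affinity:  "localhost",
		allocated: map[netip.Addr]string{},
	}
	// Pretend .128-.132 are already in use, as they evidently are on this node.
	for i := 0; i < 5; i++ {
		autoAssign(b, "localhost", fmt.Sprintf("pre-existing-%d", i))
	}
	ip, err := autoAssign(b, "localhost", "k8s-pod-network.demo")
	fmt.Println(ip, err) // 192.168.88.133 <nil>, matching the first claim above
}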
Aug 13 07:18:15.755958 containerd[1574]: 2025-08-13 07:18:15.720 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" HandleID="k8s-pod-network.ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Workload="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.724 [INFO][4877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b99jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37049b96-5b1d-4b14-aa39-fa916253ae4c", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b99jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8717189fbbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.724 [INFO][4877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.724 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8717189fbbc ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.740 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.740 [INFO][4877] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b99jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37049b96-5b1d-4b14-aa39-fa916253ae4c", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e", Pod:"csi-node-driver-b99jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8717189fbbc", MAC:"02:db:99:0f:e2:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.758777 containerd[1574]: 2025-08-13 07:18:15.751 [INFO][4877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e" Namespace="calico-system" Pod="csi-node-driver-b99jc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b99jc-eth0"
Aug 13 07:18:15.791355 containerd[1574]: time="2025-08-13T07:18:15.790980758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:15.791355 containerd[1574]: time="2025-08-13T07:18:15.791041118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:15.791355 containerd[1574]: time="2025-08-13T07:18:15.791052822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.791355 containerd[1574]: time="2025-08-13T07:18:15.791153913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.840400 kubelet[2650]: E0813 07:18:15.840196 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:18:15.867654 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 07:18:15.870902 systemd-networkd[1237]: cali084a3bef23c: Link UP
Aug 13 07:18:15.872993 systemd-networkd[1237]: cali084a3bef23c: Gained carrier
Aug 13 07:18:15.881750 kubelet[2650]: I0813 07:18:15.876462 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75444cb96b-vgppg" podStartSLOduration=1.477117822 podStartE2EDuration="6.87644315s" podCreationTimestamp="2025-08-13 07:18:09 +0000 UTC" firstStartedPulling="2025-08-13 07:18:09.740245002 +0000 UTC m=+45.265314122" lastFinishedPulling="2025-08-13 07:18:15.139570331 +0000 UTC m=+50.664639450" observedRunningTime="2025-08-13 07:18:15.860055738 +0000 UTC m=+51.385124868" watchObservedRunningTime="2025-08-13 07:18:15.87644315 +0000 UTC m=+51.401512269"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.633 [INFO][4992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.678 [INFO][4992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" iface="eth0" netns="/var/run/netns/cni-e5d5c6b5-46f5-e3e6-bc49-729fa2e9ab71"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.678 [INFO][4992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" iface="eth0" netns="/var/run/netns/cni-e5d5c6b5-46f5-e3e6-bc49-729fa2e9ab71"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.678 [INFO][4992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" iface="eth0" netns="/var/run/netns/cni-e5d5c6b5-46f5-e3e6-bc49-729fa2e9ab71"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.678 [INFO][4992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.678 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.702 [INFO][5036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.702 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.851 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.863 [WARNING][5036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.863 [INFO][5036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.866 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:15.885602 containerd[1574]: 2025-08-13 07:18:15.881 [INFO][4992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047"
Aug 13 07:18:15.889892 containerd[1574]: time="2025-08-13T07:18:15.886261522Z" level=info msg="TearDown network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" successfully"
Aug 13 07:18:15.889892 containerd[1574]: time="2025-08-13T07:18:15.886338767Z" level=info msg="StopPodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" returns successfully"
Aug 13 07:18:15.892047 systemd[1]: run-netns-cni\x2de5d5c6b5\x2d46f5\x2de3e6\x2dbc49\x2d729fa2e9ab71.mount: Deactivated successfully.
Aug 13 07:18:15.895143 containerd[1574]: time="2025-08-13T07:18:15.894192910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-l9xx9,Uid:a60817e0-e119-4674-adab-2cc042d34e82,Namespace:calico-system,Attempt:1,}"
Aug 13 07:18:15.898834 containerd[1574]: time="2025-08-13T07:18:15.898780835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b99jc,Uid:37049b96-5b1d-4b14-aa39-fa916253ae4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e\""
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.548 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0 calico-apiserver-6dc6dbff5- calico-apiserver 99eb19aa-3962-4a4e-90dc-6113f9d3975a 1014 0 2025-08-13 07:17:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc6dbff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc6dbff5-vgmmt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali084a3bef23c [] [] }} ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.549 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.595 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" HandleID="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.595 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" HandleID="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000190670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc6dbff5-vgmmt", "timestamp":"2025-08-13 07:18:15.595479399 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.595 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.721 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.723 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.795 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.807 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.819 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.822 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.825 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.826 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.829 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.838 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.849 [INFO][4954] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.849 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" host="localhost"
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.850 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:18:15.902141 containerd[1574]: 2025-08-13 07:18:15.850 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" HandleID="k8s-pod-network.856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.860 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"99eb19aa-3962-4a4e-90dc-6113f9d3975a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc6dbff5-vgmmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084a3bef23c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.860 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.860 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali084a3bef23c ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.872 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.877 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"99eb19aa-3962-4a4e-90dc-6113f9d3975a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce", Pod:"calico-apiserver-6dc6dbff5-vgmmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084a3bef23c", MAC:"e2:e1:c6:6f:24:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:15.902754 containerd[1574]: 2025-08-13 07:18:15.895 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce" Namespace="calico-apiserver" Pod="calico-apiserver-6dc6dbff5-vgmmt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0"
Aug 13 07:18:15.930181 containerd[1574]: time="2025-08-13T07:18:15.929833031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:15.930181 containerd[1574]: time="2025-08-13T07:18:15.929927569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:15.930181 containerd[1574]: time="2025-08-13T07:18:15.929942259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.930181 containerd[1574]: time="2025-08-13T07:18:15.930070145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:15.960162 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 07:18:15.988954 containerd[1574]: time="2025-08-13T07:18:15.988859591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc6dbff5-vgmmt,Uid:99eb19aa-3962-4a4e-90dc-6113f9d3975a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce\""
Aug 13 07:18:16.025212 systemd-networkd[1237]: cali01a8b362427: Link UP
Aug 13 07:18:16.025854 systemd-networkd[1237]: cali01a8b362427: Gained carrier
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.950 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0 goldmane-58fd7646b9- calico-system a60817e0-e119-4674-adab-2cc042d34e82 1033 0 2025-08-13 07:17:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-l9xx9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali01a8b362427 [] [] }} ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.950 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.987 [INFO][5170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" HandleID="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.988 [INFO][5170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" HandleID="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-l9xx9", "timestamp":"2025-08-13 07:18:15.98761185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.988 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.989 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.989 [INFO][5170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:15.996 [INFO][5170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.001 [INFO][5170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.004 [INFO][5170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.006 [INFO][5170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.008 [INFO][5170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.008 [INFO][5170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.010 [INFO][5170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.013 [INFO][5170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.019 [INFO][5170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.019 [INFO][5170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" host="localhost"
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.019 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
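All four claims come out of a single affine block: a /26 spans 2^(32-26) = 64 addresses, so 192.168.88.128/26 covers .128 through .191 and easily accommodates the .133-.136 assignments logged here. A quick self-contained check (plain Go arithmetic on the prefix, nothing Calico-specific):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block that every "Trying affinity for ..." line above names.
	block := netip.MustParsePrefix("192.168.88.128/26")
	// 2^(32-26) = 64 addresses: .128 up to and including .191.
	fmt.Println("block size:", 1<<(32-block.Bits())) // 64
	for _, s := range []string{"192.168.88.133", "192.168.88.134", "192.168.88.135", "192.168.88.136"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s))) // all true
	}
}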
Aug 13 07:18:16.039211 containerd[1574]: 2025-08-13 07:18:16.019 [INFO][5170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" HandleID="k8s-pod-network.ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.022 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a60817e0-e119-4674-adab-2cc042d34e82", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-l9xx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a8b362427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.023 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.023 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01a8b362427 ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.025 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.026 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a60817e0-e119-4674-adab-2cc042d34e82", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a", Pod:"goldmane-58fd7646b9-l9xx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a8b362427", MAC:"7e:0d:14:54:50:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 07:18:16.039807 containerd[1574]: 2025-08-13 07:18:16.034 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a" Namespace="calico-system" Pod="goldmane-58fd7646b9-l9xx9" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0"
Aug 13 07:18:16.057249 containerd[1574]: time="2025-08-13T07:18:16.057133454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:18:16.057249 containerd[1574]: time="2025-08-13T07:18:16.057206549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:18:16.057249 containerd[1574]: time="2025-08-13T07:18:16.057217832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:16.057472 containerd[1574]: time="2025-08-13T07:18:16.057330206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:18:16.088793 systemd-resolved[1453]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 07:18:16.114509 containerd[1574]: time="2025-08-13T07:18:16.114463005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-l9xx9,Uid:a60817e0-e119-4674-adab-2cc042d34e82,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a\""
Aug 13 07:18:16.739127 systemd-networkd[1237]: calib3e4a21b81e: Gained IPv6LL
Aug 13 07:18:16.846685 kubelet[2650]: E0813 07:18:16.846645 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:18:16.995831 systemd-networkd[1237]: cali084a3bef23c: Gained IPv6LL
Aug 13 07:18:17.443116 systemd-networkd[1237]: cali01a8b362427: Gained IPv6LL
Aug 13 07:18:17.508202 systemd-networkd[1237]: cali8717189fbbc: Gained IPv6LL
Aug 13 07:18:17.848957 kubelet[2650]: E0813 07:18:17.848816 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:18:18.434161 containerd[1574]: time="2025-08-13T07:18:18.434103109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:18.435285 containerd[1574]: time="2025-08-13T07:18:18.435238930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Aug 13 07:18:18.436760 containerd[1574]: time="2025-08-13T07:18:18.436709519Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:18.446294 containerd[1574]: time="2025-08-13T07:18:18.446262054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:18.446944 containerd[1574]: time="2025-08-13T07:18:18.446912208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.307105897s"
Aug 13 07:18:18.446944 containerd[1574]: time="2025-08-13T07:18:18.446941918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Aug 13 07:18:18.448166 containerd[1574]: time="2025-08-13T07:18:18.448128380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 07:18:18.449400 containerd[1574]: time="2025-08-13T07:18:18.449375875Z" level=info msg="CreateContainer within sandbox \"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 07:18:18.467705 containerd[1574]: time="2025-08-13T07:18:18.466487403Z" level=info msg="CreateContainer within sandbox \"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1cfff7bee0c368cb701b35feb47cdebfa9ef92c4a6451cf97cabe7c1967101b4\""
Aug 13 07:18:18.467705 containerd[1574]: time="2025-08-13T07:18:18.467207387Z" level=info msg="StartContainer for \"1cfff7bee0c368cb701b35feb47cdebfa9ef92c4a6451cf97cabe7c1967101b4\""
Aug 13 07:18:18.541995 containerd[1574]: time="2025-08-13T07:18:18.541944300Z" level=info msg="StartContainer for \"1cfff7bee0c368cb701b35feb47cdebfa9ef92c4a6451cf97cabe7c1967101b4\" returns successfully"
Aug 13 07:18:18.927516 kubelet[2650]: I0813 07:18:18.926700 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc6dbff5-69nqm" podStartSLOduration=34.878233284 podStartE2EDuration="38.926684174s" podCreationTimestamp="2025-08-13 07:17:40 +0000 UTC" firstStartedPulling="2025-08-13 07:18:14.399476731 +0000 UTC m=+49.924545850" lastFinishedPulling="2025-08-13 07:18:18.447927601 +0000 UTC m=+53.972996740" observedRunningTime="2025-08-13 07:18:18.926641489 +0000 UTC m=+54.451710618" watchObservedRunningTime="2025-08-13 07:18:18.926684174 +0000 UTC m=+54.451753293"
Aug 13 07:18:19.025163 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:49368.service - OpenSSH per-connection server daemon (10.0.0.1:49368).
Aug 13 07:18:19.081476 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 49368 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA
Aug 13 07:18:19.083806 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:18:19.091112 systemd-logind[1556]: New session 9 of user core.
Aug 13 07:18:19.098302 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:18:19.252428 sshd[5309]: pam_unix(sshd:session): session closed for user core
Aug 13 07:18:19.256489 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:49368.service: Deactivated successfully.
Aug 13 07:18:19.261746 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:18:19.262207 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:18:19.263480 systemd-logind[1556]: Removed session 9.
Aug 13 07:18:19.854741 kubelet[2650]: I0813 07:18:19.854691 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:21.308459 containerd[1574]: time="2025-08-13T07:18:21.308343745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.309678 containerd[1574]: time="2025-08-13T07:18:21.309627517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:18:21.311429 containerd[1574]: time="2025-08-13T07:18:21.311387096Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.314297 containerd[1574]: time="2025-08-13T07:18:21.314266411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.315054 containerd[1574]: time="2025-08-13T07:18:21.315017965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.866851639s" Aug 13 07:18:21.315054 containerd[1574]: time="2025-08-13T07:18:21.315050259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:18:21.316252 containerd[1574]: time="2025-08-13T07:18:21.316158483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:18:21.326656 containerd[1574]: time="2025-08-13T07:18:21.326607519Z" level=info msg="CreateContainer within sandbox \"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:18:21.343490 containerd[1574]: time="2025-08-13T07:18:21.343428604Z" level=info msg="CreateContainer within sandbox \"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b0842818f16402ee9a14babe3600b58e3a8bd0e93c40f830c1b395640cfde0e1\"" Aug 13 07:18:21.344391 containerd[1574]: time="2025-08-13T07:18:21.344195759Z" level=info msg="StartContainer for \"b0842818f16402ee9a14babe3600b58e3a8bd0e93c40f830c1b395640cfde0e1\"" Aug 13 07:18:21.418694 containerd[1574]: time="2025-08-13T07:18:21.418630217Z" level=info msg="StartContainer for \"b0842818f16402ee9a14babe3600b58e3a8bd0e93c40f830c1b395640cfde0e1\" returns successfully" Aug 13 07:18:22.014007 kubelet[2650]: I0813 07:18:22.013925 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68bb7cfb49-tpxsq" podStartSLOduration=32.446953431 podStartE2EDuration="38.013905499s" podCreationTimestamp="2025-08-13 07:17:44 +0000 UTC" firstStartedPulling="2025-08-13 07:18:15.749008191 +0000 UTC m=+51.274077310" lastFinishedPulling="2025-08-13 07:18:21.315960259 +0000 UTC m=+56.841029378" observedRunningTime="2025-08-13 07:18:21.968009755 +0000 UTC 
m=+57.493078894" watchObservedRunningTime="2025-08-13 07:18:22.013905499 +0000 UTC m=+57.538974618" Aug 13 07:18:24.262357 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:54322.service - OpenSSH per-connection server daemon (10.0.0.1:54322). Aug 13 07:18:24.310989 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 54322 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:24.313125 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:24.318439 systemd-logind[1556]: New session 10 of user core. Aug 13 07:18:24.325168 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:18:24.486451 containerd[1574]: time="2025-08-13T07:18:24.486382290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:24.487218 containerd[1574]: time="2025-08-13T07:18:24.487123634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:18:24.488478 containerd[1574]: time="2025-08-13T07:18:24.488448049Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:24.490844 containerd[1574]: time="2025-08-13T07:18:24.490810412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:24.491341 containerd[1574]: time="2025-08-13T07:18:24.491311525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.175113004s" Aug 13 07:18:24.491392 containerd[1574]: time="2025-08-13T07:18:24.491344554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:18:24.493354 containerd[1574]: time="2025-08-13T07:18:24.493324399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:24.494867 containerd[1574]: time="2025-08-13T07:18:24.494836472Z" level=info msg="CreateContainer within sandbox \"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:18:24.497616 sshd[5408]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:24.502286 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:54322.service: Deactivated successfully. Aug 13 07:18:24.505112 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:18:24.505198 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:18:24.506863 systemd-logind[1556]: Removed session 10. 
Aug 13 07:18:24.514787 containerd[1574]: time="2025-08-13T07:18:24.514695699Z" level=info msg="CreateContainer within sandbox \"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b8b19cb2310d5413489ac31bc95e13363d6610e65fa1e0cd806ecd7342ff4f7d\"" Aug 13 07:18:24.515755 containerd[1574]: time="2025-08-13T07:18:24.515625844Z" level=info msg="StartContainer for \"b8b19cb2310d5413489ac31bc95e13363d6610e65fa1e0cd806ecd7342ff4f7d\"" Aug 13 07:18:25.563385 containerd[1574]: time="2025-08-13T07:18:25.563197556Z" level=info msg="StartContainer for \"b8b19cb2310d5413489ac31bc95e13363d6610e65fa1e0cd806ecd7342ff4f7d\" returns successfully" Aug 13 07:18:25.568300 containerd[1574]: time="2025-08-13T07:18:25.567900876Z" level=info msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" Aug 13 07:18:25.591916 containerd[1574]: time="2025-08-13T07:18:25.591816793Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:25.592741 containerd[1574]: time="2025-08-13T07:18:25.592669192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:18:25.594704 containerd[1574]: time="2025-08-13T07:18:25.594662690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.10122072s" Aug 13 07:18:25.594704 containerd[1574]: time="2025-08-13T07:18:25.594705447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:18:25.596743 containerd[1574]: time="2025-08-13T07:18:25.596707352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:18:25.598599 containerd[1574]: time="2025-08-13T07:18:25.598563493Z" level=info msg="CreateContainer within sandbox \"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:18:25.628194 containerd[1574]: time="2025-08-13T07:18:25.628123108Z" level=info msg="CreateContainer within sandbox \"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03a89083a2c68b7706f5562593c7e1901969aff80471eb33dcdccd183ccb1e7d\"" Aug 13 07:18:25.629072 containerd[1574]: time="2025-08-13T07:18:25.629026749Z" level=info msg="StartContainer for \"03a89083a2c68b7706f5562593c7e1901969aff80471eb33dcdccd183ccb1e7d\"" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.633 [WARNING][5474] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"99eb19aa-3962-4a4e-90dc-6113f9d3975a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce", Pod:"calico-apiserver-6dc6dbff5-vgmmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084a3bef23c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.634 [INFO][5474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.634 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" iface="eth0" netns="" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.634 [INFO][5474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.634 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.671 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.671 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.671 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.678 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.678 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.680 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:25.687617 containerd[1574]: 2025-08-13 07:18:25.683 [INFO][5474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.687617 containerd[1574]: time="2025-08-13T07:18:25.687443190Z" level=info msg="TearDown network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" successfully" Aug 13 07:18:25.687617 containerd[1574]: time="2025-08-13T07:18:25.687479576Z" level=info msg="StopPodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" returns successfully" Aug 13 07:18:25.688404 containerd[1574]: time="2025-08-13T07:18:25.688353283Z" level=info msg="RemovePodSandbox for \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" Aug 13 07:18:25.691505 containerd[1574]: time="2025-08-13T07:18:25.691473847Z" level=info msg="Forcibly stopping sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\"" Aug 13 07:18:25.720970 containerd[1574]: time="2025-08-13T07:18:25.720251169Z" level=info msg="StartContainer for \"03a89083a2c68b7706f5562593c7e1901969aff80471eb33dcdccd183ccb1e7d\" returns successfully" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.741 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"99eb19aa-3962-4a4e-90dc-6113f9d3975a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"856aac8f1671ec5b38fb9020cb8e8380d65f88d40538cd5b19845c94109e27ce", Pod:"calico-apiserver-6dc6dbff5-vgmmt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali084a3bef23c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.741 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.741 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" iface="eth0" netns="" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.741 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.741 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.767 [INFO][5546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.767 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.767 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.776 [WARNING][5546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.776 [INFO][5546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" HandleID="k8s-pod-network.b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--vgmmt-eth0" Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.777 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:25.783798 containerd[1574]: 2025-08-13 07:18:25.780 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649" Aug 13 07:18:25.783798 containerd[1574]: time="2025-08-13T07:18:25.783759522Z" level=info msg="TearDown network for sandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" successfully" Aug 13 07:18:25.788370 containerd[1574]: time="2025-08-13T07:18:25.788330144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:25.788506 containerd[1574]: time="2025-08-13T07:18:25.788402795Z" level=info msg="RemovePodSandbox \"b5185df7d442f2be609de5733e8c68ac25359b3f277cc6bd673ab88099124649\" returns successfully" Aug 13 07:18:25.789016 containerd[1574]: time="2025-08-13T07:18:25.788952757Z" level=info msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.825 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0", GenerateName:"calico-kube-controllers-68bb7cfb49-", Namespace:"calico-system", SelfLink:"", UID:"5c443559-5d69-4d9a-86e1-d2701af11811", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb7cfb49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857", Pod:"calico-kube-controllers-68bb7cfb49-tpxsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e4a21b81e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.825 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.825 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" iface="eth0" netns="" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.825 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.825 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.854 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.854 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.854 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.862 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.862 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.863 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:25.870351 containerd[1574]: 2025-08-13 07:18:25.867 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.871062 containerd[1574]: time="2025-08-13T07:18:25.870418340Z" level=info msg="TearDown network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" successfully" Aug 13 07:18:25.871062 containerd[1574]: time="2025-08-13T07:18:25.870447202Z" level=info msg="StopPodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" returns successfully" Aug 13 07:18:25.871062 containerd[1574]: time="2025-08-13T07:18:25.870907363Z" level=info msg="RemovePodSandbox for \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" Aug 13 07:18:25.871062 containerd[1574]: time="2025-08-13T07:18:25.870934532Z" level=info msg="Forcibly stopping sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\"" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.908 [WARNING][5595] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0", GenerateName:"calico-kube-controllers-68bb7cfb49-", Namespace:"calico-system", SelfLink:"", UID:"5c443559-5d69-4d9a-86e1-d2701af11811", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb7cfb49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cd186fae94922dfc0b38b82d0502993ccc93bb24a3935771002f4177bd97857", Pod:"calico-kube-controllers-68bb7cfb49-tpxsq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3e4a21b81e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.908 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.908 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" iface="eth0" netns="" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.908 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.908 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.944 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.945 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.945 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.958 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.958 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" HandleID="k8s-pod-network.803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Workload="localhost-k8s-calico--kube--controllers--68bb7cfb49--tpxsq-eth0" Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.961 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:25.974813 containerd[1574]: 2025-08-13 07:18:25.969 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441" Aug 13 07:18:25.976786 containerd[1574]: time="2025-08-13T07:18:25.974868817Z" level=info msg="TearDown network for sandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" successfully" Aug 13 07:18:25.984897 containerd[1574]: time="2025-08-13T07:18:25.984797021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:25.985060 containerd[1574]: time="2025-08-13T07:18:25.984949626Z" level=info msg="RemovePodSandbox \"803b4aa14e9be20b2711472a9fe987181c7c2d2cf0bdedc2d3fe694117ab5441\" returns successfully" Aug 13 07:18:25.986202 containerd[1574]: time="2025-08-13T07:18:25.986096356Z" level=info msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" Aug 13 07:18:26.085624 systemd-journald[1159]: Under memory pressure, flushing caches. Aug 13 07:18:26.083055 systemd-resolved[1453]: Under memory pressure, flushing caches. Aug 13 07:18:26.083129 systemd-resolved[1453]: Flushed all caches. Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.080 [WARNING][5621] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" WorkloadEndpoint="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.080 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.080 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" iface="eth0" netns="" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.080 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.080 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.321 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.321 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.321 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.353 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.353 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.360 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.371916 containerd[1574]: 2025-08-13 07:18:26.367 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.372807 containerd[1574]: time="2025-08-13T07:18:26.371950101Z" level=info msg="TearDown network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" successfully" Aug 13 07:18:26.372807 containerd[1574]: time="2025-08-13T07:18:26.372002295Z" level=info msg="StopPodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" returns successfully" Aug 13 07:18:26.373247 containerd[1574]: time="2025-08-13T07:18:26.373176862Z" level=info msg="RemovePodSandbox for \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" Aug 13 07:18:26.373374 containerd[1574]: time="2025-08-13T07:18:26.373254673Z" level=info msg="Forcibly stopping sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\"" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.427 [WARNING][5649] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" WorkloadEndpoint="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.427 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.427 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" iface="eth0" netns="" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.427 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.427 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.453 [INFO][5658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.453 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.453 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.459 [WARNING][5658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.459 [INFO][5658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" HandleID="k8s-pod-network.7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Workload="localhost-k8s-whisker--6c8c97fd--vpb85-eth0" Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.460 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.467025 containerd[1574]: 2025-08-13 07:18:26.463 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba" Aug 13 07:18:26.467467 containerd[1574]: time="2025-08-13T07:18:26.467086394Z" level=info msg="TearDown network for sandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" successfully" Aug 13 07:18:26.471608 containerd[1574]: time="2025-08-13T07:18:26.471578115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:26.471693 containerd[1574]: time="2025-08-13T07:18:26.471668889Z" level=info msg="RemovePodSandbox \"7c9703e7cade481744fc6ac21ba19cb54ddb4b03258e8f4493f09802485f26ba\" returns successfully" Aug 13 07:18:26.472302 containerd[1574]: time="2025-08-13T07:18:26.472274685Z" level=info msg="StopPodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\"" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.505 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a60817e0-e119-4674-adab-2cc042d34e82", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a", Pod:"goldmane-58fd7646b9-l9xx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a8b362427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.505 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.505 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" iface="eth0" netns="" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.505 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.505 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.530 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.530 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.530 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.536 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.536 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.538 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.543770 containerd[1574]: 2025-08-13 07:18:26.540 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.544340 containerd[1574]: time="2025-08-13T07:18:26.543832984Z" level=info msg="TearDown network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" successfully" Aug 13 07:18:26.544340 containerd[1574]: time="2025-08-13T07:18:26.543867627Z" level=info msg="StopPodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" returns successfully" Aug 13 07:18:26.544514 containerd[1574]: time="2025-08-13T07:18:26.544468063Z" level=info msg="RemovePodSandbox for \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\"" Aug 13 07:18:26.544538 containerd[1574]: time="2025-08-13T07:18:26.544522802Z" level=info msg="Forcibly stopping sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\"" Aug 13 07:18:26.599460 kubelet[2650]: I0813 07:18:26.599047 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc6dbff5-vgmmt" podStartSLOduration=36.99355113 podStartE2EDuration="46.599028105s" podCreationTimestamp="2025-08-13 07:17:40 +0000 UTC" firstStartedPulling="2025-08-13 07:18:15.990156418 +0000 UTC m=+51.515225537" lastFinishedPulling="2025-08-13 07:18:25.595633393 +0000 UTC m=+61.120702512" observedRunningTime="2025-08-13 07:18:26.593742999 +0000 UTC m=+62.118812118" watchObservedRunningTime="2025-08-13 07:18:26.599028105 +0000 UTC m=+62.124097224" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.578 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a60817e0-e119-4674-adab-2cc042d34e82", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a", Pod:"goldmane-58fd7646b9-l9xx9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali01a8b362427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.579 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.579 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" iface="eth0" netns="" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.579 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.579 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.607 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.607 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.607 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.613 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.613 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" HandleID="k8s-pod-network.50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Workload="localhost-k8s-goldmane--58fd7646b9--l9xx9-eth0" Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.614 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.620607 containerd[1574]: 2025-08-13 07:18:26.617 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047" Aug 13 07:18:26.622129 containerd[1574]: time="2025-08-13T07:18:26.621957376Z" level=info msg="TearDown network for sandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" successfully" Aug 13 07:18:26.626585 containerd[1574]: time="2025-08-13T07:18:26.626482326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:26.626672 containerd[1574]: time="2025-08-13T07:18:26.626608345Z" level=info msg="RemovePodSandbox \"50e68391742574949656c2eaef8cd0bd0cbd115bc21d5910c9ff9690d2cac047\" returns successfully" Aug 13 07:18:26.627289 containerd[1574]: time="2025-08-13T07:18:26.627248783Z" level=info msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.666 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b99jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37049b96-5b1d-4b14-aa39-fa916253ae4c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e", Pod:"csi-node-driver-b99jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8717189fbbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.666 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.666 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" iface="eth0" netns="" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.666 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.666 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.690 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.690 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.690 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.697 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.697 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.700 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.706059 containerd[1574]: 2025-08-13 07:18:26.703 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.706562 containerd[1574]: time="2025-08-13T07:18:26.706110942Z" level=info msg="TearDown network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" successfully" Aug 13 07:18:26.706562 containerd[1574]: time="2025-08-13T07:18:26.706144753Z" level=info msg="StopPodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" returns successfully" Aug 13 07:18:26.706899 containerd[1574]: time="2025-08-13T07:18:26.706832918Z" level=info msg="RemovePodSandbox for \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" Aug 13 07:18:26.707046 containerd[1574]: time="2025-08-13T07:18:26.706916300Z" level=info msg="Forcibly stopping sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\"" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.742 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b99jc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37049b96-5b1d-4b14-aa39-fa916253ae4c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e", Pod:"csi-node-driver-b99jc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8717189fbbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.743 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.743 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" iface="eth0" netns="" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.743 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.743 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.768 [INFO][5767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.768 [INFO][5767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.768 [INFO][5767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.774 [WARNING][5767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.775 [INFO][5767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" HandleID="k8s-pod-network.c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Workload="localhost-k8s-csi--node--driver--b99jc-eth0" Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.778 [INFO][5767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.785510 containerd[1574]: 2025-08-13 07:18:26.781 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656" Aug 13 07:18:26.786029 containerd[1574]: time="2025-08-13T07:18:26.785538152Z" level=info msg="TearDown network for sandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" successfully" Aug 13 07:18:26.794060 containerd[1574]: time="2025-08-13T07:18:26.793996091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:26.797757 containerd[1574]: time="2025-08-13T07:18:26.797698242Z" level=info msg="RemovePodSandbox \"c67d336115fff23f7e9e673856d22636c77b3f22dad42094539e4bac29573656\" returns successfully" Aug 13 07:18:26.798430 containerd[1574]: time="2025-08-13T07:18:26.798394593Z" level=info msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.835 [WARNING][5785] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"230a398e-7dc1-4ab4-8443-e6fef0e021f2", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4", Pod:"calico-apiserver-6dc6dbff5-69nqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18407cbdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.835 [INFO][5785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.835 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" iface="eth0" netns="" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.835 [INFO][5785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.835 [INFO][5785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.867 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.867 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.868 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.873 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.873 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.875 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.881271 containerd[1574]: 2025-08-13 07:18:26.878 [INFO][5785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.881271 containerd[1574]: time="2025-08-13T07:18:26.881175683Z" level=info msg="TearDown network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" successfully" Aug 13 07:18:26.881271 containerd[1574]: time="2025-08-13T07:18:26.881205658Z" level=info msg="StopPodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" returns successfully" Aug 13 07:18:26.882671 containerd[1574]: time="2025-08-13T07:18:26.881699361Z" level=info msg="RemovePodSandbox for \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" Aug 13 07:18:26.882671 containerd[1574]: time="2025-08-13T07:18:26.881731079Z" level=info msg="Forcibly stopping sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\"" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.914 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0", GenerateName:"calico-apiserver-6dc6dbff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"230a398e-7dc1-4ab4-8443-e6fef0e021f2", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc6dbff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afd95302e6f5b8294c891837ba204f15aa85e8dedc59bd51ebde425a910a7ec4", Pod:"calico-apiserver-6dc6dbff5-69nqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c18407cbdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.915 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.915 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" iface="eth0" netns="" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.915 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.915 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.938 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.939 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.939 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.945 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.945 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" HandleID="k8s-pod-network.8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Workload="localhost-k8s-calico--apiserver--6dc6dbff5--69nqm-eth0" Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.947 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:26.954342 containerd[1574]: 2025-08-13 07:18:26.950 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f" Aug 13 07:18:26.954804 containerd[1574]: time="2025-08-13T07:18:26.954381835Z" level=info msg="TearDown network for sandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" successfully" Aug 13 07:18:26.960157 containerd[1574]: time="2025-08-13T07:18:26.960103140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:26.960245 containerd[1574]: time="2025-08-13T07:18:26.960208561Z" level=info msg="RemovePodSandbox \"8826818cca3c1f973fcca1a8e178fb8d654fe87e7ed35ed3a3ec0133f6e0135f\" returns successfully" Aug 13 07:18:26.962214 containerd[1574]: time="2025-08-13T07:18:26.961656001Z" level=info msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.022 [WARNING][5837] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f9ce3415-4567-4b4f-85b1-ab7682c65560", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648", Pod:"coredns-7c65d6cfc9-rttzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1405a2f1a6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.024 [INFO][5837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.024 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" iface="eth0" netns="" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.024 [INFO][5837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.024 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.057 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.058 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.058 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.067 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.067 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.070 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:27.086341 containerd[1574]: 2025-08-13 07:18:27.079 [INFO][5837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.086843 containerd[1574]: time="2025-08-13T07:18:27.086393877Z" level=info msg="TearDown network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" successfully" Aug 13 07:18:27.086843 containerd[1574]: time="2025-08-13T07:18:27.086423470Z" level=info msg="StopPodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" returns successfully" Aug 13 07:18:27.087142 containerd[1574]: time="2025-08-13T07:18:27.087103875Z" level=info msg="RemovePodSandbox for \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" Aug 13 07:18:27.087212 containerd[1574]: time="2025-08-13T07:18:27.087145261Z" level=info msg="Forcibly stopping sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\"" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.128 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f9ce3415-4567-4b4f-85b1-ab7682c65560", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"05a40fb5aa7e5ac138304191600769030de7885b4032cf572a81d1fa5382a648", Pod:"coredns-7c65d6cfc9-rttzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1405a2f1a6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.129 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.129 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" iface="eth0" netns="" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.129 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.129 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.228 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.229 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.229 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.237 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.237 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" HandleID="k8s-pod-network.c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Workload="localhost-k8s-coredns--7c65d6cfc9--rttzq-eth0" Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.239 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:27.248969 containerd[1574]: 2025-08-13 07:18:27.244 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789" Aug 13 07:18:27.248969 containerd[1574]: time="2025-08-13T07:18:27.248942085Z" level=info msg="TearDown network for sandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" successfully" Aug 13 07:18:27.575463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182947736.mount: Deactivated successfully. Aug 13 07:18:27.599170 kubelet[2650]: I0813 07:18:27.599124 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:27.847647 containerd[1574]: time="2025-08-13T07:18:27.847440310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:27.847647 containerd[1574]: time="2025-08-13T07:18:27.847540702Z" level=info msg="RemovePodSandbox \"c42cf1d730901717872cd735e8621219b7aae247c36000b82c91fb6d4f25c789\" returns successfully" Aug 13 07:18:27.848749 containerd[1574]: time="2025-08-13T07:18:27.848687773Z" level=info msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.894 [WARNING][5891] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af09357e-c282-465e-8e3e-c2975907b447", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7", Pod:"coredns-7c65d6cfc9-mlvdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie73c7d9beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.894 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.894 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" iface="eth0" netns="" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.894 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.894 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.921 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.921 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.921 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.927 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.927 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.928 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:27.935252 containerd[1574]: 2025-08-13 07:18:27.931 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:27.935731 containerd[1574]: time="2025-08-13T07:18:27.935303164Z" level=info msg="TearDown network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" successfully" Aug 13 07:18:27.935731 containerd[1574]: time="2025-08-13T07:18:27.935331837Z" level=info msg="StopPodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" returns successfully" Aug 13 07:18:27.936693 containerd[1574]: time="2025-08-13T07:18:27.936658804Z" level=info msg="RemovePodSandbox for \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" Aug 13 07:18:27.936750 containerd[1574]: time="2025-08-13T07:18:27.936696843Z" level=info msg="Forcibly stopping sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\"" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:27.983 [WARNING][5921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"af09357e-c282-465e-8e3e-c2975907b447", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24336b25487e1f47a9ea37d885f5b0f12f55600d3cedc6be8b13e0e35f7f9db7", Pod:"coredns-7c65d6cfc9-mlvdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie73c7d9beac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:27.983 [INFO][5921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:27.983 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" iface="eth0" netns="" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:27.983 [INFO][5921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:27.983 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.013 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.014 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.014 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.021 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.021 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" HandleID="k8s-pod-network.0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Workload="localhost-k8s-coredns--7c65d6cfc9--mlvdk-eth0" Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.022 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:28.029404 containerd[1574]: 2025-08-13 07:18:28.025 [INFO][5921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2" Aug 13 07:18:28.029952 containerd[1574]: time="2025-08-13T07:18:28.029465513Z" level=info msg="TearDown network for sandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" successfully" Aug 13 07:18:28.034779 containerd[1574]: time="2025-08-13T07:18:28.034730058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:18:28.034919 containerd[1574]: time="2025-08-13T07:18:28.034855166Z" level=info msg="RemovePodSandbox \"0f7c252b1894cc2a4cfe86c20170f1cd749190a177d20b1db5c31b4ece86d7d2\" returns successfully" Aug 13 07:18:28.131110 systemd-resolved[1453]: Under memory pressure, flushing caches. Aug 13 07:18:28.131146 systemd-resolved[1453]: Flushed all caches. Aug 13 07:18:28.133315 systemd-journald[1159]: Under memory pressure, flushing caches. 
Aug 13 07:18:28.250863 containerd[1574]: time="2025-08-13T07:18:28.250791166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:28.251599 containerd[1574]: time="2025-08-13T07:18:28.251529378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:18:28.252958 containerd[1574]: time="2025-08-13T07:18:28.252921931Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:28.255549 containerd[1574]: time="2025-08-13T07:18:28.255513635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:28.256492 containerd[1574]: time="2025-08-13T07:18:28.256451390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.659701873s" Aug 13 07:18:28.256627 containerd[1574]: time="2025-08-13T07:18:28.256502423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:18:28.258655 containerd[1574]: time="2025-08-13T07:18:28.257928447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:18:28.258896 containerd[1574]: time="2025-08-13T07:18:28.258840585Z" level=info msg="CreateContainer within sandbox \"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:18:28.278837 containerd[1574]: time="2025-08-13T07:18:28.278776976Z" level=info msg="CreateContainer within sandbox \"ec2a12435d68a7177e629c35c8b84f3b0452706824b20bd9ce426873da57f30a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"44712fbb1b0171dcaac95b5f773ada3d2da1b0896f6ffb62c7c3331864f2c0ec\"" Aug 13 07:18:28.279406 containerd[1574]: time="2025-08-13T07:18:28.279347694Z" level=info msg="StartContainer for \"44712fbb1b0171dcaac95b5f773ada3d2da1b0896f6ffb62c7c3331864f2c0ec\"" Aug 13 07:18:28.537327 containerd[1574]: time="2025-08-13T07:18:28.537166451Z" level=info msg="StartContainer for \"44712fbb1b0171dcaac95b5f773ada3d2da1b0896f6ffb62c7c3331864f2c0ec\" returns successfully" Aug 13 07:18:28.699777 kubelet[2650]: I0813 07:18:28.699693 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-l9xx9" podStartSLOduration=33.557890765 podStartE2EDuration="45.699675095s" podCreationTimestamp="2025-08-13 07:17:43 +0000 UTC" firstStartedPulling="2025-08-13 07:18:16.115740102 +0000 UTC m=+51.640809221" lastFinishedPulling="2025-08-13 07:18:28.257524432 +0000 UTC m=+63.782593551" observedRunningTime="2025-08-13 07:18:28.698947382 +0000 UTC m=+64.224016521" watchObservedRunningTime="2025-08-13 07:18:28.699675095 +0000 UTC m=+64.224744214" Aug 13 07:18:29.505311 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:54334.service - OpenSSH per-connection server daemon 
(10.0.0.1:54334). Aug 13 07:18:29.553995 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 54334 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:29.556150 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:29.562007 systemd-logind[1556]: New session 11 of user core. Aug 13 07:18:29.570426 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:18:29.669790 systemd[1]: run-containerd-runc-k8s.io-b0842818f16402ee9a14babe3600b58e3a8bd0e93c40f830c1b395640cfde0e1-runc.7T1B8z.mount: Deactivated successfully. Aug 13 07:18:30.807795 sshd[6005]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:30.815250 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:55884.service - OpenSSH per-connection server daemon (10.0.0.1:55884). Aug 13 07:18:30.815799 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:54334.service: Deactivated successfully. Aug 13 07:18:30.818813 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:18:30.821169 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:18:30.822351 systemd-logind[1556]: Removed session 11. Aug 13 07:18:30.850299 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 55884 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:30.852461 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:30.857264 systemd-logind[1556]: New session 12 of user core. Aug 13 07:18:30.867174 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:18:31.038425 kubelet[2650]: I0813 07:18:31.038365 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:31.379189 sshd[6089]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:31.390148 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:55896.service - OpenSSH per-connection server daemon (10.0.0.1:55896). Aug 13 07:18:31.391091 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:55884.service: Deactivated successfully. Aug 13 07:18:31.394937 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:18:31.395664 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:18:31.396776 systemd-logind[1556]: Removed session 12. Aug 13 07:18:31.424918 sshd[6107]: Accepted publickey for core from 10.0.0.1 port 55896 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:31.426627 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:31.431101 systemd-logind[1556]: New session 13 of user core. Aug 13 07:18:31.441172 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:18:31.597716 sshd[6107]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:31.601920 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:55896.service: Deactivated successfully. Aug 13 07:18:31.607248 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:18:31.613163 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:18:31.614265 systemd-logind[1556]: Removed session 13. 
Aug 13 07:18:31.844775 containerd[1574]: time="2025-08-13T07:18:31.844621821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:31.846011 containerd[1574]: time="2025-08-13T07:18:31.845965291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:18:31.853238 containerd[1574]: time="2025-08-13T07:18:31.853197393Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:31.855386 containerd[1574]: time="2025-08-13T07:18:31.855360203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:31.855999 containerd[1574]: time="2025-08-13T07:18:31.855972465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.597613385s" Aug 13 07:18:31.856065 containerd[1574]: time="2025-08-13T07:18:31.856005896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:18:31.858584 containerd[1574]: time="2025-08-13T07:18:31.858535257Z" level=info msg="CreateContainer within sandbox \"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:18:31.872010 containerd[1574]: time="2025-08-13T07:18:31.871597625Z" level=info msg="CreateContainer within sandbox \"ccb8d97d068e7936fac697b5a1c874ef96621281895ff5b671741d46cb384e5e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a86a621bb667a9578f603f723a71266fce0aabe0564b86ac4c38ac857539208\"" Aug 13 07:18:31.873441 containerd[1574]: time="2025-08-13T07:18:31.873382614Z" level=info msg="StartContainer for \"7a86a621bb667a9578f603f723a71266fce0aabe0564b86ac4c38ac857539208\"" Aug 13 07:18:31.943433 containerd[1574]: time="2025-08-13T07:18:31.943380877Z" level=info msg="StartContainer for \"7a86a621bb667a9578f603f723a71266fce0aabe0564b86ac4c38ac857539208\" returns successfully" Aug 13 07:18:32.631117 kubelet[2650]: I0813 07:18:32.631022 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b99jc" podStartSLOduration=32.674522629 podStartE2EDuration="48.630976842s" podCreationTimestamp="2025-08-13 07:17:44 +0000 UTC" firstStartedPulling="2025-08-13 07:18:15.900281309 +0000 UTC m=+51.425350428" lastFinishedPulling="2025-08-13 07:18:31.856735522 +0000 UTC m=+67.381804641" observedRunningTime="2025-08-13 07:18:32.630950325 +0000 UTC m=+68.156019444" watchObservedRunningTime="2025-08-13 07:18:32.630976842 +0000 UTC m=+68.156045961" Aug 13 07:18:32.763638 kubelet[2650]: I0813 07:18:32.763558 2650 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:18:32.763638 kubelet[2650]: I0813 07:18:32.763621 2650 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:18:33.623604 kubelet[2650]: E0813 07:18:33.623538 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:36.616161 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:55900.service - OpenSSH per-connection server daemon (10.0.0.1:55900). Aug 13 07:18:36.660842 sshd[6192]: Accepted publickey for core from 10.0.0.1 port 55900 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:36.662965 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:36.668439 systemd-logind[1556]: New session 14 of user core. Aug 13 07:18:36.678263 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:18:36.829566 sshd[6192]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:36.835411 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:55900.service: Deactivated successfully. Aug 13 07:18:36.838157 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:18:36.838248 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:18:36.839427 systemd-logind[1556]: Removed session 14. Aug 13 07:18:40.624324 kubelet[2650]: E0813 07:18:40.624239 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:41.849125 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). Aug 13 07:18:41.883224 sshd[6212]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:41.884959 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:41.888834 systemd-logind[1556]: New session 15 of user core. Aug 13 07:18:41.900150 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:18:42.010340 sshd[6212]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:42.015246 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:53376.service: Deactivated successfully. Aug 13 07:18:42.017832 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:18:42.017939 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:18:42.019159 systemd-logind[1556]: Removed session 15. Aug 13 07:18:47.019302 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378). Aug 13 07:18:47.066598 sshd[6229]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:47.069835 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:47.075149 systemd-logind[1556]: New session 16 of user core. Aug 13 07:18:47.081864 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:18:47.286558 sshd[6229]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:47.291375 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:53378.service: Deactivated successfully. 
Aug 13 07:18:47.293830 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:18:47.293849 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:18:47.295171 systemd-logind[1556]: Removed session 16. Aug 13 07:18:52.094553 kubelet[2650]: I0813 07:18:52.094499 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:52.297101 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:48854.service - OpenSSH per-connection server daemon (10.0.0.1:48854). Aug 13 07:18:52.330126 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 48854 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:52.332067 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:52.335800 systemd-logind[1556]: New session 17 of user core. Aug 13 07:18:52.344144 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:18:52.489201 sshd[6272]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:52.493483 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:48854.service: Deactivated successfully. Aug 13 07:18:52.496527 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:18:52.497197 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:18:52.498221 systemd-logind[1556]: Removed session 17. Aug 13 07:18:54.624632 kubelet[2650]: E0813 07:18:54.624582 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:18:57.497131 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:48856.service - OpenSSH per-connection server daemon (10.0.0.1:48856). Aug 13 07:18:57.540490 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 48856 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:57.542272 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:57.546585 systemd-logind[1556]: New session 18 of user core. Aug 13 07:18:57.554209 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:18:57.825037 sshd[6293]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:57.833214 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:48858.service - OpenSSH per-connection server daemon (10.0.0.1:48858). Aug 13 07:18:57.833767 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:48856.service: Deactivated successfully. Aug 13 07:18:57.836714 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:18:57.837751 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:18:57.839862 systemd-logind[1556]: Removed session 18. Aug 13 07:18:57.869219 sshd[6305]: Accepted publickey for core from 10.0.0.1 port 48858 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:57.871232 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:57.875624 systemd-logind[1556]: New session 19 of user core. Aug 13 07:18:57.885147 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:18:58.083059 systemd-resolved[1453]: Under memory pressure, flushing caches. Aug 13 07:18:58.083103 systemd-resolved[1453]: Flushed all caches. Aug 13 07:18:58.084910 systemd-journald[1159]: Under memory pressure, flushing caches. 
Aug 13 07:18:58.203148 sshd[6305]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:58.212506 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:48870.service - OpenSSH per-connection server daemon (10.0.0.1:48870). Aug 13 07:18:58.213185 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:48858.service: Deactivated successfully. Aug 13 07:18:58.216120 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:18:58.218173 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:18:58.219857 systemd-logind[1556]: Removed session 19. Aug 13 07:18:58.248301 sshd[6319]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:18:58.250265 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:58.254658 systemd-logind[1556]: New session 20 of user core. Aug 13 07:18:58.260147 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:18:59.623666 kubelet[2650]: E0813 07:18:59.623615 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:00.078137 sshd[6319]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:00.095624 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:33524.service - OpenSSH per-connection server daemon (10.0.0.1:33524). Aug 13 07:19:00.099255 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:48870.service: Deactivated successfully. Aug 13 07:19:00.107439 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:19:00.112767 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:19:00.114796 systemd-logind[1556]: Removed session 20. Aug 13 07:19:00.179396 sshd[6381]: Accepted publickey for core from 10.0.0.1 port 33524 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:19:00.181288 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:00.186411 systemd-logind[1556]: New session 21 of user core. Aug 13 07:19:00.191197 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:19:00.774208 sshd[6381]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:00.786468 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:33536.service - OpenSSH per-connection server daemon (10.0.0.1:33536). Aug 13 07:19:00.788145 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:33524.service: Deactivated successfully. Aug 13 07:19:00.796378 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:19:00.798471 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:19:00.802438 systemd-logind[1556]: Removed session 21. Aug 13 07:19:00.824785 sshd[6396]: Accepted publickey for core from 10.0.0.1 port 33536 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:19:00.826494 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:00.831444 systemd-logind[1556]: New session 22 of user core. Aug 13 07:19:00.841185 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:19:00.950748 sshd[6396]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:00.955710 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:33536.service: Deactivated successfully. Aug 13 07:19:00.958809 systemd[1]: session-22.scope: Deactivated successfully. 
Aug 13 07:19:00.959567 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:19:00.961062 systemd-logind[1556]: Removed session 22. Aug 13 07:19:05.963115 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:33542.service - OpenSSH per-connection server daemon (10.0.0.1:33542). Aug 13 07:19:05.995561 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 33542 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:19:05.997051 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:06.000675 systemd-logind[1556]: New session 23 of user core. Aug 13 07:19:06.010172 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:19:06.120208 sshd[6422]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:06.124616 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:33542.service: Deactivated successfully. Aug 13 07:19:06.127318 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:19:06.127381 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:19:06.128606 systemd-logind[1556]: Removed session 23. Aug 13 07:19:10.624236 kubelet[2650]: E0813 07:19:10.624186 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:19:11.134107 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:59964.service - OpenSSH per-connection server daemon (10.0.0.1:59964). Aug 13 07:19:11.168152 sshd[6437]: Accepted publickey for core from 10.0.0.1 port 59964 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:19:11.169922 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:11.175042 systemd-logind[1556]: New session 24 of user core. Aug 13 07:19:11.183346 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:19:11.328231 sshd[6437]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:11.333298 systemd-logind[1556]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:19:11.338235 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:59964.service: Deactivated successfully. Aug 13 07:19:11.344040 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:19:11.346034 systemd-logind[1556]: Removed session 24. Aug 13 07:19:16.327111 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:59978.service - OpenSSH per-connection server daemon (10.0.0.1:59978). Aug 13 07:19:16.361522 sshd[6452]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:+DcVhnpRDsBWp3H5IUXcw71JLKRCmgP+N/m7GkGrueA Aug 13 07:19:16.363067 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:16.367518 systemd-logind[1556]: New session 25 of user core. Aug 13 07:19:16.373164 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:19:16.504552 sshd[6452]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:16.509140 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:59978.service: Deactivated successfully. Aug 13 07:19:16.511781 systemd-logind[1556]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:19:16.511917 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:19:16.513159 systemd-logind[1556]: Removed session 25.