Jul 6 23:55:54.827848 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:55:54.827868 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:54.827875 kernel: BIOS-provided physical RAM map:
Jul 6 23:55:54.827880 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:55:54.827885 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:55:54.827889 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:55:54.827895 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jul 6 23:55:54.827900 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jul 6 23:55:54.827906 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:55:54.827910 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:55:54.827915 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:55:54.827919 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:55:54.827924 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:55:54.827928 kernel: NX (Execute Disable) protection: active
Jul 6 23:55:54.827935 kernel: APIC: Static calls initialized
Jul 6 23:55:54.827940 kernel: SMBIOS 3.0.0 present.
Jul 6 23:55:54.827946 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jul 6 23:55:54.827950 kernel: Hypervisor detected: KVM
Jul 6 23:55:54.827955 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:55:54.827960 kernel: kvm-clock: using sched offset of 2931987856 cycles
Jul 6 23:55:54.827965 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:55:54.827970 kernel: tsc: Detected 2445.404 MHz processor
Jul 6 23:55:54.827976 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:55:54.827982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:55:54.827987 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jul 6 23:55:54.827992 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:55:54.827997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:55:54.828002 kernel: Using GB pages for direct mapping
Jul 6 23:55:54.828007 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:55:54.828012 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Jul 6 23:55:54.828017 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828022 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828029 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828034 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jul 6 23:55:54.828039 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828044 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828049 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828054 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:54.828059 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Jul 6 23:55:54.828064 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Jul 6 23:55:54.828073 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jul 6 23:55:54.828078 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Jul 6 23:55:54.828083 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Jul 6 23:55:54.828089 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Jul 6 23:55:54.828094 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Jul 6 23:55:54.828099 kernel: No NUMA configuration found
Jul 6 23:55:54.828104 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jul 6 23:55:54.828111 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jul 6 23:55:54.828116 kernel: Zone ranges:
Jul 6 23:55:54.828122 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:55:54.828127 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jul 6 23:55:54.828132 kernel: Normal empty
Jul 6 23:55:54.828137 kernel: Movable zone start for each node
Jul 6 23:55:54.828142 kernel: Early memory node ranges
Jul 6 23:55:54.828147 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:55:54.828152 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jul 6 23:55:54.828159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jul 6 23:55:54.828164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:55:54.828169 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:55:54.828175 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:55:54.828180 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:55:54.828185 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:55:54.828190 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:55:54.828195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:55:54.828201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:55:54.828207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:55:54.828213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:55:54.828218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:55:54.828223 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:55:54.828228 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:55:54.828233 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:55:54.828239 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:55:54.828244 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:55:54.828249 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:55:54.828256 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:55:54.828270 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:55:54.828276 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:55:54.828281 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:55:54.828286 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:55:54.828291 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:55:54.828298 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:54.828304 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:55:54.828309 kernel: random: crng init done
Jul 6 23:55:54.828315 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:55:54.828321 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 6 23:55:54.828326 kernel: Fallback order for Node 0: 0
Jul 6 23:55:54.828331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jul 6 23:55:54.828336 kernel: Policy zone: DMA32
Jul 6 23:55:54.828341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:55:54.828347 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 125152K reserved, 0K cma-reserved)
Jul 6 23:55:54.828352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:55:54.828358 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:55:54.828364 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:55:54.828369 kernel: Dynamic Preempt: voluntary
Jul 6 23:55:54.828375 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:55:54.828380 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:55:54.828386 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:55:54.828391 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:55:54.828397 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:55:54.828402 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:55:54.828408 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:55:54.828414 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:55:54.828419 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:55:54.828425 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:55:54.828430 kernel: Console: colour VGA+ 80x25
Jul 6 23:55:54.828435 kernel: printk: console [tty0] enabled
Jul 6 23:55:54.828440 kernel: printk: console [ttyS0] enabled
Jul 6 23:55:54.828445 kernel: ACPI: Core revision 20230628
Jul 6 23:55:54.828451 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:55:54.828456 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:55:54.828462 kernel: x2apic enabled
Jul 6 23:55:54.828468 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:55:54.828473 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:55:54.828478 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:55:54.828483 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Jul 6 23:55:54.828489 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:55:54.828494 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:55:54.828499 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:55:54.828511 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:55:54.828516 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:55:54.828522 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:55:54.828527 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:55:54.828534 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:55:54.828720 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:55:54.828729 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:55:54.828736 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:55:54.828742 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:55:54.828750 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:55:54.828755 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:55:54.828761 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:55:54.828767 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:55:54.828773 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:55:54.829006 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:55:54.829015 kernel: landlock: Up and running.
Jul 6 23:55:54.829021 kernel: SELinux: Initializing.
Jul 6 23:55:54.829029 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.829035 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.829041 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:55:54.829047 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.829052 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.829058 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:55:54.829064 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:55:54.829069 kernel: ... version: 0
Jul 6 23:55:54.829075 kernel: ... bit width: 48
Jul 6 23:55:54.829082 kernel: ... generic registers: 6
Jul 6 23:55:54.829088 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:55:54.829093 kernel: ... max period: 00007fffffffffff
Jul 6 23:55:54.829099 kernel: ... fixed-purpose events: 0
Jul 6 23:55:54.829104 kernel: ... event mask: 000000000000003f
Jul 6 23:55:54.829110 kernel: signal: max sigframe size: 1776
Jul 6 23:55:54.829115 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:55:54.829121 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:55:54.829127 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:55:54.829134 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:55:54.829139 kernel: .... node #0, CPUs: #1
Jul 6 23:55:54.829145 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:55:54.829151 kernel: smpboot: Max logical packages: 1
Jul 6 23:55:54.829156 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Jul 6 23:55:54.829162 kernel: devtmpfs: initialized
Jul 6 23:55:54.829167 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:55:54.829173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:55:54.829179 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.829184 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:55:54.829191 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:55:54.829197 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:55:54.829202 kernel: audit: type=2000 audit(1751846154.430:1): state=initialized audit_enabled=0 res=1
Jul 6 23:55:54.829208 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:55:54.829213 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:55:54.829219 kernel: cpuidle: using governor menu
Jul 6 23:55:54.829225 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:55:54.829230 kernel: dca service started, version 1.12.1
Jul 6 23:55:54.829236 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:55:54.829243 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:55:54.829248 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:55:54.829254 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:55:54.829272 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:55:54.829278 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:55:54.829284 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:55:54.829290 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:55:54.829295 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:55:54.829302 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:55:54.829308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:55:54.829314 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:55:54.829319 kernel: ACPI: Interpreter enabled
Jul 6 23:55:54.829325 kernel: ACPI: PM: (supports S0 S5)
Jul 6 23:55:54.829330 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:55:54.829336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:55:54.829341 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:55:54.829347 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:55:54.829353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:55:54.829464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:55:54.829535 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:55:54.829623 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:55:54.829632 kernel: PCI host bridge to bus 0000:00
Jul 6 23:55:54.829702 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:55:54.829758 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:55:54.829818 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:54.829872 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jul 6 23:55:54.829925 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:54.829978 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:54.830032 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:55:54.830105 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:55:54.830174 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:55:54.830242 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jul 6 23:55:54.830320 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jul 6 23:55:54.830383 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jul 6 23:55:54.830445 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jul 6 23:55:54.830507 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:55:54.830653 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.830729 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jul 6 23:55:54.830799 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.830862 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jul 6 23:55:54.830929 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.830991 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jul 6 23:55:54.831059 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831125 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jul 6 23:55:54.831194 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831255 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jul 6 23:55:54.831342 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831405 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jul 6 23:55:54.831473 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831553 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jul 6 23:55:54.831628 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831692 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jul 6 23:55:54.831785 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 6 23:55:54.831856 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jul 6 23:55:54.831925 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:55:54.832031 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:55:54.832152 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:55:54.832247 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jul 6 23:55:54.832327 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jul 6 23:55:54.832395 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:55:54.832457 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:55:54.832528 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:55:54.832645 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jul 6 23:55:54.832713 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 6 23:55:54.832777 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jul 6 23:55:54.832839 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 6 23:55:54.832901 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.832962 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.833036 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 6 23:55:54.833105 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jul 6 23:55:54.833168 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 6 23:55:54.833228 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.833306 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.833378 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 6 23:55:54.833442 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jul 6 23:55:54.833510 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jul 6 23:55:54.833616 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 6 23:55:54.833691 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.833752 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.833820 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 6 23:55:54.833883 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 6 23:55:54.833943 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 6 23:55:54.834003 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.834066 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.834135 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 6 23:55:54.834199 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jul 6 23:55:54.834274 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jul 6 23:55:54.834340 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 6 23:55:54.834401 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.834461 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.834534 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 6 23:55:54.834875 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jul 6 23:55:54.834942 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jul 6 23:55:54.835005 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 6 23:55:54.835068 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.835127 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.835136 kernel: acpiphp: Slot [0] registered
Jul 6 23:55:54.835254 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 6 23:55:54.835339 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jul 6 23:55:54.835404 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jul 6 23:55:54.835466 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jul 6 23:55:54.835528 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 6 23:55:54.835608 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.835669 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.835677 kernel: acpiphp: Slot [0-2] registered
Jul 6 23:55:54.835741 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 6 23:55:54.835801 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.835861 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.835869 kernel: acpiphp: Slot [0-3] registered
Jul 6 23:55:54.835927 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 6 23:55:54.835987 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.836046 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.836054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:55:54.836060 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:55:54.836068 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:55:54.836074 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:55:54.836079 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:55:54.836085 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:55:54.836091 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:55:54.836096 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:55:54.836102 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:55:54.836107 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:55:54.836113 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:55:54.836120 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:55:54.836126 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:55:54.836131 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:55:54.836137 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:55:54.836142 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:55:54.836148 kernel: iommu: Default domain type: Translated
Jul 6 23:55:54.836154 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:55:54.836159 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:55:54.836165 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:55:54.836172 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:55:54.836178 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jul 6 23:55:54.836251 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:55:54.836326 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:55:54.836387 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:55:54.836395 kernel: vgaarb: loaded
Jul 6 23:55:54.836401 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:55:54.836407 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:55:54.836415 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:55:54.836421 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:55:54.836427 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:55:54.836432 kernel: pnp: PnP ACPI init
Jul 6 23:55:54.836499 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:55:54.836509 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:55:54.836514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:55:54.836520 kernel: NET: Registered PF_INET protocol family
Jul 6 23:55:54.836526 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:55:54.836534 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:55:54.836578 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:55:54.836585 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:55:54.836591 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:55:54.836597 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:55:54.836602 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.836608 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:54.836614 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:55:54.836622 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:55:54.836691 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 6 23:55:54.836753 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 6 23:55:54.836814 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 6 23:55:54.836875 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jul 6 23:55:54.836934 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jul 6 23:55:54.836993 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jul 6 23:55:54.837057 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 6 23:55:54.837117 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.837177 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.837237 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 6 23:55:54.837311 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.837372 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.837499 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 6 23:55:54.837580 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.837642 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.837708 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 6 23:55:54.837768 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.837828 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.837888 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 6 23:55:54.837950 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.838010 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.838075 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 6 23:55:54.838149 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.838215 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.838290 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 6 23:55:54.838353 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jul 6 23:55:54.838432 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.838507 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.838629 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 6 23:55:54.838694 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jul 6 23:55:54.840647 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.840722 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.840792 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 6 23:55:54.840856 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jul 6 23:55:54.840919 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.840987 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.841046 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:55:54.841100 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:55:54.841153 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:54.841206 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jul 6 23:55:54.841274 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:54.841331 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:54.841403 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 6 23:55:54.841462 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jul 6 23:55:54.841534 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jul 6 23:55:54.841672 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 6 23:55:54.841746 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jul 6 23:55:54.841833 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 6 23:55:54.841911 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jul 6 23:55:54.841997 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 6 23:55:54.842063 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jul 6 23:55:54.842121 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 6 23:55:54.842208 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jul 6 23:55:54.842291 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 6 23:55:54.842362 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jul 6 23:55:54.842419 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jul 6 23:55:54.842474 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 6 23:55:54.842535 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jul 6 23:55:54.844657 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jul 6 23:55:54.844719 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 6 23:55:54.844790 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jul 6 23:55:54.844854 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 6 23:55:54.844912 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 6 23:55:54.844921 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:55:54.844928 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:55:54.844935 kernel: Initialise system trusted keyrings
Jul 6 23:55:54.844941 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 6 23:55:54.844947 kernel: Key type asymmetric registered
Jul 6 23:55:54.844953 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:55:54.844961 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:55:54.844967 kernel: io scheduler mq-deadline registered
Jul 6 23:55:54.844974 kernel: io scheduler kyber registered
Jul 6 23:55:54.844980 kernel: io scheduler bfq registered
Jul 6 23:55:54.845046 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jul 6 23:55:54.845113 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jul 6 23:55:54.845176 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jul 6 23:55:54.845238 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jul 6 23:55:54.845315 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jul 6 23:55:54.845383 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jul 6 23:55:54.845446 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jul 6 23:55:54.845508 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jul 6 23:55:54.846629 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jul 6 23:55:54.846702 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jul 6 23:55:54.846767 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jul 6 23:55:54.846828 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jul 6 23:55:54.846890 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jul 6 23:55:54.846956 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jul 6 23:55:54.847017 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jul 6 23:55:54.847077 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jul 6 23:55:54.847086 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:55:54.847144 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jul 6 23:55:54.847203 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jul 6 23:55:54.847212 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:55:54.847218 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jul 6 23:55:54.847227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:55:54.847233 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:55:54.847240 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:55:54.847246 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:55:54.847252 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:55:54.847332 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:55:54.847343 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:55:54.847399 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:55:54.847458 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:55:54 UTC (1751846154)
Jul 6 23:55:54.847515 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:55:54.847524 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:55:54.847530 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:55:54.847537 kernel: Segment Routing with IPv6
Jul 6 23:55:54.847950 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:55:54.847957 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:55:54.847964 kernel: Key type dns_resolver registered
Jul 6 23:55:54.847970 kernel: IPI shorthand broadcast: enabled
Jul 6 23:55:54.847979 kernel: sched_clock: Marking stable (1017006953, 139287212)->(1163441426, -7147261)
Jul 6 23:55:54.847985 kernel: registered taskstats version 1
Jul 6 23:55:54.847991 kernel: Loading compiled-in X.509 certificates
Jul 6 23:55:54.847997 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:55:54.848030 kernel: Key type .fscrypt registered
Jul 6 23:55:54.848036 kernel: Key type fscrypt-provisioning registered
Jul 6 23:55:54.848042 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:55:54.848048 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:55:54.848055 kernel: ima: No architecture policies found Jul 6 23:55:54.848063 kernel: clk: Disabling unused clocks Jul 6 23:55:54.848069 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:55:54.848075 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:55:54.848081 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:55:54.848087 kernel: Run /init as init process Jul 6 23:55:54.848095 kernel: with arguments: Jul 6 23:55:54.848101 kernel: /init Jul 6 23:55:54.848107 kernel: with environment: Jul 6 23:55:54.848113 kernel: HOME=/ Jul 6 23:55:54.848119 kernel: TERM=linux Jul 6 23:55:54.848125 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:55:54.848135 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:54.848143 systemd[1]: Detected virtualization kvm. Jul 6 23:55:54.848150 systemd[1]: Detected architecture x86-64. Jul 6 23:55:54.848156 systemd[1]: Running in initrd. Jul 6 23:55:54.848162 systemd[1]: No hostname configured, using default hostname. Jul 6 23:55:54.848168 systemd[1]: Hostname set to . Jul 6 23:55:54.848176 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:55:54.848182 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:55:54.848189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:54.848195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:54.848203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:55:54.848209 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:54.848216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:55:54.848223 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:55:54.848231 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:55:54.848237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:55:54.848244 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:54.848250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:54.848257 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:54.848272 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:54.848280 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:54.848287 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:54.848293 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:54.848299 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:54.848306 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:54.848312 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 6 23:55:54.848319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:54.848325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:54.848331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:54.848339 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:54.848346 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:55:54.848352 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:54.848359 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:55:54.848365 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:55:54.848371 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:54.848378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:54.848384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:54.848407 systemd-journald[187]: Collecting audit messages is disabled. Jul 6 23:55:54.848424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:54.848431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:54.848437 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:55:54.848446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:54.848453 systemd-journald[187]: Journal started Jul 6 23:55:54.848468 systemd-journald[187]: Runtime Journal (/run/log/journal/da719bce1a1b4defa66fd899211b2fca) is 4.8M, max 38.4M, 33.6M free. Jul 6 23:55:54.839612 systemd-modules-load[188]: Inserted module 'overlay' Jul 6 23:55:54.867069 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:55:54.867094 kernel: Bridge firewalling registered Jul 6 23:55:54.866734 systemd-modules-load[188]: Inserted module 'br_netfilter' Jul 6 23:55:54.895557 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:54.895641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:54.896858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:54.897501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:54.902673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:54.904025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:55:54.906509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:54.914843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:54.916307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:54.920923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:54.922871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:54.924082 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:54.929655 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 6 23:55:54.932299 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:54.936143 dracut-cmdline[223]: dracut-dracut-053 Jul 6 23:55:54.940049 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:54.956314 systemd-resolved[224]: Positive Trust Anchors: Jul 6 23:55:54.956634 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:54.956660 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:54.958485 systemd-resolved[224]: Defaulting to hostname 'linux'. Jul 6 23:55:54.965465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:54.966154 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:54.992574 kernel: SCSI subsystem initialized Jul 6 23:55:55.000558 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:55:55.009578 kernel: iscsi: registered transport (tcp) Jul 6 23:55:55.025607 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:55:55.025638 kernel: QLogic iSCSI HBA Driver Jul 6 23:55:55.048665 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:55.054650 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:55:55.072592 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:55:55.072626 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:55:55.072638 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:55:55.109560 kernel: raid6: avx2x4 gen() 37035 MB/s Jul 6 23:55:55.126561 kernel: raid6: avx2x2 gen() 34287 MB/s Jul 6 23:55:55.143669 kernel: raid6: avx2x1 gen() 28633 MB/s Jul 6 23:55:55.143700 kernel: raid6: using algorithm avx2x4 gen() 37035 MB/s Jul 6 23:55:55.161779 kernel: raid6: .... xor() 4869 MB/s, rmw enabled Jul 6 23:55:55.161809 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:55:55.178568 kernel: xor: automatically using best checksumming function avx Jul 6 23:55:55.289574 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:55:55.297450 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:55.302676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:55.311680 systemd-udevd[407]: Using default interface naming scheme 'v255'. Jul 6 23:55:55.314488 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 6 23:55:55.321673 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:55:55.329686 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jul 6 23:55:55.353507 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:55.360789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:55.399480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:55.404677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:55:55.419487 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:55.421119 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:55.422478 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:55.424186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:55.431638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:55:55.444967 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:55.467611 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:55:55.473564 kernel: scsi host0: Virtio SCSI HBA Jul 6 23:55:55.480652 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 6 23:55:55.501071 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:55.530958 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:55:55.531053 kernel: AES CTR mode by8 optimization enabled Jul 6 23:55:55.531101 kernel: ACPI: bus type USB registered Jul 6 23:55:55.531161 kernel: usbcore: registered new interface driver usbfs Jul 6 23:55:55.531194 kernel: usbcore: registered new interface driver hub Jul 6 23:55:55.531239 kernel: usbcore: registered new device driver usb Jul 6 23:55:55.501222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:55.532626 kernel: libata version 3.00 loaded. Jul 6 23:55:55.532283 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:55.534133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:55.534296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:55.534977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:55.541929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:55:55.550696 kernel: ahci 0000:00:1f.2: version 3.0 Jul 6 23:55:55.550859 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 6 23:55:55.554820 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 6 23:55:55.554953 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 6 23:55:55.580569 kernel: scsi host1: ahci Jul 6 23:55:55.582575 kernel: scsi host2: ahci Jul 6 23:55:55.585849 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:55:55.586024 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 6 23:55:55.586111 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 6 23:55:55.586191 kernel: scsi host3: ahci Jul 6 23:55:55.586309 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:55:55.586393 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 6 23:55:55.586470 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 6 23:55:55.590476 kernel: scsi host4: ahci Jul 6 23:55:55.590765 kernel: hub 1-0:1.0: USB hub found Jul 6 23:55:55.591760 kernel: hub 1-0:1.0: 4 ports detected Jul 6 23:55:55.591873 kernel: scsi host5: ahci Jul 6 23:55:55.592721 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 6 23:55:55.592843 kernel: hub 2-0:1.0: USB hub found Jul 6 23:55:55.592934 kernel: hub 2-0:1.0: 4 ports detected Jul 6 23:55:55.596927 kernel: scsi host6: ahci Jul 6 23:55:55.597046 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jul 6 23:55:55.597061 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jul 6 23:55:55.597070 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jul 6 23:55:55.597077 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jul 6 23:55:55.597085 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jul 6 23:55:55.597092 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jul 6 23:55:55.602569 kernel: sd 0:0:0:0: Power-on or device reset occurred Jul 6 23:55:55.602744 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 6 23:55:55.602871 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:55:55.602994 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jul 6 23:55:55.603104 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:55:55.604590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:55:55.604611 kernel: GPT:17805311 != 80003071 Jul 6 23:55:55.604619 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:55:55.604628 kernel: GPT:17805311 != 80003071 Jul 6 23:55:55.604635 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:55:55.604641 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:55.604649 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:55:55.648707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:55.656762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:55.672397 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:55:55.832591 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 6 23:55:55.914559 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.914646 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 6 23:55:55.914659 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.914669 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.914679 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 6 23:55:55.914688 kernel: ata1.00: applying bridge limits Jul 6 23:55:55.917036 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.917145 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:55.918562 kernel: ata1.00: configured for UDMA/100 Jul 6 23:55:55.920954 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:55:55.973052 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 6 23:55:55.973234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:55:55.981566 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (456) Jul 6 23:55:55.985558 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (464) Jul 6 23:55:55.987021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 6 23:55:55.989445 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:55:56.000626 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:55:56.003311 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 6 23:55:56.005606 kernel: usbcore: registered new interface driver usbhid Jul 6 23:55:56.005635 kernel: usbhid: USB HID core driver Jul 6 23:55:56.012560 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jul 6 23:55:56.014698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:55:56.017824 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 6 23:55:56.020439 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 6 23:55:56.021004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 6 23:55:56.029778 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:55:56.035520 disk-uuid[576]: Primary Header is updated. Jul 6 23:55:56.035520 disk-uuid[576]: Secondary Entries is updated. Jul 6 23:55:56.035520 disk-uuid[576]: Secondary Header is updated. Jul 6 23:55:56.050572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:56.055565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:56.067569 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:57.062638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:57.063487 disk-uuid[578]: The operation has completed successfully. Jul 6 23:55:57.104445 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:55:57.104553 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:55:57.116653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 6 23:55:57.119042 sh[598]: Success Jul 6 23:55:57.129568 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 6 23:55:57.170471 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:55:57.182783 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:55:57.183406 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:55:57.204622 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:55:57.204677 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:57.204700 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:55:57.206797 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:55:57.208500 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:55:57.216580 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:55:57.218826 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:55:57.220285 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:55:57.230686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:55:57.234713 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:55:57.245589 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:57.245625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:57.248288 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:57.254451 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:57.254476 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:57.262644 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:55:57.265642 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:57.269355 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:55:57.273693 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:55:57.307565 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:57.319425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:57.335990 ignition[722]: Ignition 2.19.0 Jul 6 23:55:57.336739 ignition[722]: Stage: fetch-offline Jul 6 23:55:57.336773 ignition[722]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.336780 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.336853 ignition[722]: parsed url from cmdline: "" Jul 6 23:55:57.336856 ignition[722]: no config URL provided Jul 6 23:55:57.336860 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:57.339500 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
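verity-setup.service maps the /usr partition through dm-verity before it is mounted read-only at /sysusr/usr, so every block is checked against a sha256 hash tree as it is read (here accelerated by the CPU's SHA extensions, per "sha256-ni"). The same mechanism can be reproduced with veritysetup from cryptsetup; the device names and hash below are placeholders, not the ones used by this image:

    veritysetup format /dev/DATA /dev/HASH        # builds the hash tree and prints the root hash
    veritysetup open /dev/DATA usr /dev/HASH <root-hash>
    mount -o ro /dev/mapper/usr /sysusr/usr

Any modification of the underlying data then surfaces as a read error instead of silently corrupt file contents.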
Jul 6 23:55:57.336869 ignition[722]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:57.336874 ignition[722]: failed to fetch config: resource requires networking Jul 6 23:55:57.337039 ignition[722]: Ignition finished successfully Jul 6 23:55:57.346813 systemd-networkd[779]: lo: Link UP Jul 6 23:55:57.346822 systemd-networkd[779]: lo: Gained carrier Jul 6 23:55:57.348421 systemd-networkd[779]: Enumeration completed Jul 6 23:55:57.348485 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:57.349348 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:57.349351 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:57.349897 systemd[1]: Reached target network.target - Network. Jul 6 23:55:57.350693 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:57.350696 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:57.351705 systemd-networkd[779]: eth0: Link UP Jul 6 23:55:57.351708 systemd-networkd[779]: eth0: Gained carrier Jul 6 23:55:57.351715 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:57.358786 systemd-networkd[779]: eth1: Link UP Jul 6 23:55:57.358791 systemd-networkd[779]: eth1: Gained carrier Jul 6 23:55:57.358797 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:57.358859 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
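Both NICs match the catch-all /usr/lib/systemd/network/zz-default.network; the "potentially unpredictable interface name" notice only means the match is by name glob rather than by stable hardware attributes. The unit's exact contents are not reproduced in the log, but a catch-all DHCP policy of roughly this shape would produce the behavior seen here (illustrative, not the shipped file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

A more specific .network file in /etc/systemd/network, sorting before zz-default.network, would take precedence per interface.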
Jul 6 23:55:57.369703 ignition[787]: Ignition 2.19.0 Jul 6 23:55:57.369712 ignition[787]: Stage: fetch Jul 6 23:55:57.369848 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.369856 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.369921 ignition[787]: parsed url from cmdline: "" Jul 6 23:55:57.369924 ignition[787]: no config URL provided Jul 6 23:55:57.369928 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:57.369933 ignition[787]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:57.369948 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 6 23:55:57.370062 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 6 23:55:57.389599 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:55:57.420605 systemd-networkd[779]: eth0: DHCPv4 address 157.180.92.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:55:57.570315 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 6 23:55:57.574114 ignition[787]: GET result: OK Jul 6 23:55:57.574179 ignition[787]: parsing config with SHA512: 53c7f79e053e9b9658a8a7d49d88e312eb9fa5c4d1a6acd38c49e82e3b31c6ec387cc622b8caecbd2af4db5b0b7b7db8e5b284b5f4692d471746cd61833b86f8 Jul 6 23:55:57.577938 unknown[787]: fetched base config from "system" Jul 6 23:55:57.577947 unknown[787]: fetched base config from "system" Jul 6 23:55:57.578610 ignition[787]: fetch: fetch complete Jul 6 23:55:57.577951 unknown[787]: fetched user config from "hetzner" Jul 6 23:55:57.578616 ignition[787]: fetch: fetch passed Jul 6 23:55:57.580720 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:55:57.578675 ignition[787]: Ignition finished successfully Jul 6 23:55:57.585672 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:55:57.598323 ignition[795]: Ignition 2.19.0 Jul 6 23:55:57.598335 ignition[795]: Stage: kargs Jul 6 23:55:57.598531 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.601527 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:55:57.599382 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.600574 ignition[795]: kargs: kargs passed Jul 6 23:55:57.600636 ignition[795]: Ignition finished successfully Jul 6 23:55:57.608706 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:57.618523 ignition[801]: Ignition 2.19.0 Jul 6 23:55:57.618554 ignition[801]: Stage: disks Jul 6 23:55:57.618842 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.624149 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:57.618853 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.625348 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:57.619613 ignition[801]: disks: disks passed Jul 6 23:55:57.626143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:57.619652 ignition[801]: Ignition finished successfully Jul 6 23:55:57.627227 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:57.628377 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:57.629521 systemd[1]: Reached target basic.target - Basic System. 
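The fetch stage's first GET fails because DHCP has not completed yet; once eth0 and eth1 acquire addresses, attempt #2 against the link-local metadata endpoint succeeds and the userdata is parsed (only its SHA512 is logged). The config itself is not shown, but the later "adding ssh keys to user core" op implies something of this general shape (spec version and key value are illustrative):

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA..."] }
        ]
      }
    }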
Jul 6 23:55:57.636693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:55:57.648347 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:55:57.651026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:55:57.655615 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:55:57.725571 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:55:57.726007 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:55:57.726821 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:57.732601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:57.735087 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:55:57.738706 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:55:57.740756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:55:57.748428 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (818) Jul 6 23:55:57.748450 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:57.741658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:57.752286 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:57.752309 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:57.754405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:55:57.764596 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:57.764648 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:57.765721 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:55:57.771709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:57.787169 coreos-metadata[820]: Jul 06 23:55:57.787 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jul 6 23:55:57.788315 coreos-metadata[820]: Jul 06 23:55:57.788 INFO Fetch successful Jul 6 23:55:57.788990 coreos-metadata[820]: Jul 06 23:55:57.788 INFO wrote hostname ci-4081-3-4-2-e8b158d58b to /sysroot/etc/hostname Jul 6 23:55:57.791534 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:55:57.801931 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:55:57.806173 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:55:57.809424 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:55:57.812248 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:55:57.875101 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:57.879627 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:55:57.882700 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
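The hostname comes from the same link-local metadata service as the userdata. The query the agent performs can be reproduced from inside the VM; curl here is just for illustration:

    curl http://169.254.169.254/hetzner/v1/metadata/hostname

The response is written to /sysroot/etc/hostname so the real root picks it up after switch-root.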
Jul 6 23:55:57.887567 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:57.907942 ignition[935]: INFO : Ignition 2.19.0 Jul 6 23:55:57.909213 ignition[935]: INFO : Stage: mount Jul 6 23:55:57.909213 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:57.909213 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:57.908336 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:55:57.911826 ignition[935]: INFO : mount: mount passed Jul 6 23:55:57.911826 ignition[935]: INFO : Ignition finished successfully Jul 6 23:55:57.910056 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:57.914687 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:55:58.201653 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:55:58.206745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:58.216577 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Jul 6 23:55:58.219638 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:58.219665 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:58.222209 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:58.227474 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:58.227499 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:55:58.229874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:58.253828 ignition[962]: INFO : Ignition 2.19.0 Jul 6 23:55:58.253828 ignition[962]: INFO : Stage: files Jul 6 23:55:58.255214 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:58.255214 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:55:58.255214 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:55:58.257889 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:55:58.257889 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:55:58.259966 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:55:58.259966 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:55:58.259966 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:55:58.258355 unknown[962]: wrote ssh authorized keys file for user: core Jul 6 23:55:58.263139 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:55:58.263139 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:55:58.420721 systemd-networkd[779]: eth1: Gained IPv6LL Jul 6 23:55:58.460687 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:55:58.740694 systemd-networkd[779]: eth0: Gained IPv6LL Jul 6 23:56:00.707216 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:56:00.709094 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:56:01.521157 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:56:01.718844 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:56:01.718844 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:56:01.722597 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:56:01.722597 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:56:01.722597 ignition[962]: INFO : files: files passed Jul 6 23:56:01.722597 ignition[962]: INFO : Ignition finished successfully Jul 6 23:56:01.722598 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:56:01.733749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:56:01.737628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:56:01.743572 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:56:01.744352 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:56:01.753330 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:56:01.753330 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:56:01.755049 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:56:01.756683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:56:01.757622 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:56:01.763723 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:56:01.780461 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:56:01.780608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:56:01.782085 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:56:01.783350 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:56:01.784672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:56:01.789671 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:56:01.800487 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:56:01.805668 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:56:01.812455 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:56:01.813079 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:56:01.814193 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:56:01.815215 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:56:01.815301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:56:01.816478 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jul 6 23:56:01.817152 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:56:01.818196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:56:01.819146 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:56:01.820075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:56:01.821103 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:56:01.822166 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:56:01.823277 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:56:01.824318 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:56:01.825386 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:56:01.826361 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:56:01.826443 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:56:01.827625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:56:01.828297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:56:01.829232 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:56:01.829319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:56:01.830404 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:56:01.830497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:56:01.831833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:56:01.831920 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:56:01.832660 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:56:01.832741 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:56:01.833536 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:56:01.833656 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:56:01.849722 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:56:01.852709 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:56:01.853256 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:56:01.853405 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:56:01.854963 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:56:01.855078 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:56:01.862111 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:56:01.862180 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:56:01.864280 ignition[1015]: INFO : Ignition 2.19.0 Jul 6 23:56:01.864280 ignition[1015]: INFO : Stage: umount Jul 6 23:56:01.866980 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:56:01.866980 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:56:01.866980 ignition[1015]: INFO : umount: umount passed Jul 6 23:56:01.866980 ignition[1015]: INFO : Ignition finished successfully Jul 6 23:56:01.867897 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:56:01.867989 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 6 23:56:01.875509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:56:01.876034 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:56:01.876092 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:56:01.877690 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:56:01.877726 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:56:01.878171 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:56:01.878215 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:56:01.878916 systemd[1]: Stopped target network.target - Network. Jul 6 23:56:01.879809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:56:01.879846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:56:01.880806 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:56:01.881777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:56:01.886609 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:56:01.887163 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:56:01.888249 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:56:01.889179 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:56:01.889224 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:56:01.890101 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:56:01.890130 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:56:01.891012 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:56:01.891051 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:56:01.891955 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:56:01.891987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:56:01.893083 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:56:01.894070 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:56:01.895201 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:56:01.895269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:56:01.896358 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:56:01.896414 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:56:01.898594 systemd-networkd[779]: eth0: DHCPv6 lease lost Jul 6 23:56:01.900968 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:56:01.901041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:56:01.902579 systemd-networkd[779]: eth1: DHCPv6 lease lost Jul 6 23:56:01.903982 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:56:01.904037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:56:01.904968 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:56:01.905048 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:56:01.906261 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:56:01.906297 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:56:01.912811 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 6 23:56:01.913276 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:56:01.913315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:56:01.913875 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:56:01.913907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:56:01.914408 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:56:01.914439 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:56:01.915440 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:56:01.924512 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:56:01.924613 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:56:01.927079 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:56:01.927245 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:56:01.928413 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:56:01.928443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:56:01.929313 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:56:01.929338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:56:01.930384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:56:01.930417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:56:01.931867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:56:01.931899 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:56:01.932989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:56:01.933021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:56:01.944664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:56:01.945351 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:56:01.945393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:56:01.945933 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:56:01.945969 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:56:01.946512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:56:01.946576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:56:01.947759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:56:01.947795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:56:01.952267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:56:01.952355 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:56:01.954008 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:56:01.960684 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:56:01.966508 systemd[1]: Switching root. Jul 6 23:56:02.001205 systemd-journald[187]: Journal stopped Jul 6 23:56:02.885534 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:56:02.885629 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:56:02.885642 kernel: SELinux: policy capability open_perms=1 Jul 6 23:56:02.885653 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:56:02.885660 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:56:02.885671 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:56:02.885680 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:56:02.885688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:56:02.885696 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:56:02.885703 kernel: audit: type=1403 audit(1751846162.125:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:56:02.885712 systemd[1]: Successfully loaded SELinux policy in 36.044ms. Jul 6 23:56:02.885725 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.181ms. Jul 6 23:56:02.885734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:56:02.885743 systemd[1]: Detected virtualization kvm. Jul 6 23:56:02.885752 systemd[1]: Detected architecture x86-64. Jul 6 23:56:02.885760 systemd[1]: Detected first boot. Jul 6 23:56:02.885769 systemd[1]: Hostname set to <ci-4081-3-4-2-e8b158d58b>. Jul 6 23:56:02.885777 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:56:02.885785 zram_generator::config[1058]: No configuration found. Jul 6 23:56:02.885794 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:56:02.885802 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:56:02.885810 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:56:02.885819 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:56:02.885829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:56:02.885837 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:56:02.885845 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:56:02.885853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:56:02.885861 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:56:02.885869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:56:02.885877 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:56:02.885885 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:56:02.885896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:56:02.885904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:56:02.885912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:56:02.885920 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:56:02.885928 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jul 6 23:56:02.885937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:56:02.885945 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:56:02.885953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:56:02.885961 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:56:02.885971 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:56:02.885979 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:56:02.885987 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:56:02.885995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:56:02.886003 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:56:02.886012 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:56:02.886022 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:56:02.886030 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:56:02.886038 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:56:02.886046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:56:02.886054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:56:02.886062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:56:02.886070 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:56:02.886077 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:56:02.886086 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:56:02.886095 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:56:02.886104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:02.886112 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:56:02.886123 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:56:02.886132 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:56:02.886141 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:56:02.886151 systemd[1]: Reached target machines.target - Containers. Jul 6 23:56:02.886159 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:56:02.886167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:56:02.886187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:56:02.886196 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:56:02.886205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:56:02.886214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:56:02.886222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:56:02.886231 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:56:02.886239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 6 23:56:02.886247 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:56:02.886256 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:56:02.886264 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:56:02.886272 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:56:02.886280 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:56:02.886289 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:56:02.886297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:56:02.886307 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:56:02.886315 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:56:02.886323 kernel: loop: module loaded Jul 6 23:56:02.886332 kernel: fuse: init (API version 7.39) Jul 6 23:56:02.886340 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:56:02.886348 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:56:02.886356 systemd[1]: Stopped verity-setup.service. Jul 6 23:56:02.886364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:02.886373 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:56:02.886383 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:56:02.886407 systemd-journald[1134]: Collecting audit messages is disabled. Jul 6 23:56:02.886428 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:56:02.886439 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:56:02.886447 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:56:02.886456 kernel: ACPI: bus type drm_connector registered Jul 6 23:56:02.886466 systemd-journald[1134]: Journal started Jul 6 23:56:02.886483 systemd-journald[1134]: Runtime Journal (/run/log/journal/da719bce1a1b4defa66fd899211b2fca) is 4.8M, max 38.4M, 33.6M free. Jul 6 23:56:02.601749 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:56:02.623738 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:56:02.624372 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:56:02.889722 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:56:02.890537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:56:02.891421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:56:02.892622 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:56:02.892745 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:56:02.893848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:56:02.893952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:56:02.894682 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:56:02.894781 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:56:02.895768 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 6 23:56:02.895873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:56:02.896769 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:56:02.896874 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:56:02.897765 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:56:02.897866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:56:02.898951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:56:02.899775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:56:02.900922 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:56:02.912431 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:56:02.917892 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:56:02.924705 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:56:02.928612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:56:02.929651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:56:02.929703 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:56:02.933840 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:56:02.940270 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:56:02.946692 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:56:02.947457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:56:02.951632 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:56:02.954675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:56:02.955312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:56:02.957383 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:56:02.958305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:56:02.965693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:56:02.969079 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:56:02.979771 systemd-journald[1134]: Time spent on flushing to /var/log/journal/da719bce1a1b4defa66fd899211b2fca is 16.827ms for 1129 entries. Jul 6 23:56:02.979771 systemd-journald[1134]: System Journal (/var/log/journal/da719bce1a1b4defa66fd899211b2fca) is 8.0M, max 584.8M, 576.8M free. Jul 6 23:56:03.017073 systemd-journald[1134]: Received client request to flush runtime journal. Jul 6 23:56:02.977752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:56:02.982412 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:56:02.983263 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
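The journal begins in the volatile runtime area under /run/log/journal and is flushed to the persistent journal under /var/log/journal once that is writable; the size caps logged above are derived from the filesystem sizes. They can be pinned in /etc/systemd/journald.conf if the defaults are unsuitable (values below are illustrative, not this host's computed defaults):

    [Journal]
    Storage=persistent
    RuntimeMaxUse=38M
    SystemMaxUse=584M

journalctl --flush requests the same runtime-to-persistent handover on demand.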
Jul 6 23:56:02.983968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:56:02.984864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:56:02.987913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:56:02.998814 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:56:03.008321 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:56:03.014064 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:56:03.016631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:56:03.022243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:56:03.028523 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:56:03.034220 kernel: loop0: detected capacity change from 0 to 8 Jul 6 23:56:03.044567 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:56:03.050497 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:56:03.051283 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jul 6 23:56:03.051294 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jul 6 23:56:03.052879 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:56:03.057888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:56:03.065734 kernel: loop1: detected capacity change from 0 to 140768 Jul 6 23:56:03.065690 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:56:03.097770 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:56:03.102638 kernel: loop2: detected capacity change from 0 to 229808 Jul 6 23:56:03.105739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:56:03.115110 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jul 6 23:56:03.115162 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jul 6 23:56:03.118703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:56:03.145573 kernel: loop3: detected capacity change from 0 to 142488 Jul 6 23:56:03.191584 kernel: loop4: detected capacity change from 0 to 8 Jul 6 23:56:03.194826 kernel: loop5: detected capacity change from 0 to 140768 Jul 6 23:56:03.221793 kernel: loop6: detected capacity change from 0 to 229808 Jul 6 23:56:03.245580 kernel: loop7: detected capacity change from 0 to 142488 Jul 6 23:56:03.262459 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 6 23:56:03.262856 (sd-merge)[1207]: Merged extensions into '/usr'. Jul 6 23:56:03.269299 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:56:03.269316 systemd[1]: Reloading... Jul 6 23:56:03.343567 zram_generator::config[1232]: No configuration found. Jul 6 23:56:03.449576 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
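The (sd-merge) lines are systemd-sysext overlaying the extension images onto /usr, including the kubernetes.raw that Ignition linked into /etc/extensions earlier; the reload that follows makes units shipped inside the extensions visible to systemd. The merge state can be inspected and redone with the stock CLI:

    systemd-sysext list       # extension images found in /etc/extensions, /run/extensions, ...
    systemd-sysext status     # hierarchies currently extended and by what
    systemd-sysext refresh    # unmerge and re-merge after images change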
Jul 6 23:56:03.456587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:03.498813 systemd[1]: Reloading finished in 229 ms. Jul 6 23:56:03.525619 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:56:03.526335 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:56:03.535992 systemd[1]: Starting ensure-sysext.service... Jul 6 23:56:03.538683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:56:03.547614 systemd[1]: Reloading requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:56:03.547624 systemd[1]: Reloading... Jul 6 23:56:03.567991 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:56:03.569108 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:56:03.571063 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:56:03.571337 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jul 6 23:56:03.571437 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jul 6 23:56:03.573986 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:56:03.574412 systemd-tmpfiles[1277]: Skipping /boot Jul 6 23:56:03.581196 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:56:03.581260 systemd-tmpfiles[1277]: Skipping /boot Jul 6 23:56:03.620229 zram_generator::config[1303]: No configuration found. Jul 6 23:56:03.701321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:03.738975 systemd[1]: Reloading finished in 191 ms. Jul 6 23:56:03.752269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:56:03.758911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:56:03.763281 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:56:03.770760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:56:03.773658 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:56:03.778742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:56:03.780728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:56:03.782685 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:56:03.789359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.789531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:56:03.797181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:56:03.800710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 6 23:56:03.802268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:56:03.803870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:56:03.803956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.810755 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:56:03.814836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:56:03.816105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.816287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:56:03.816419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:56:03.825983 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:56:03.827089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.830783 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jul 6 23:56:03.832869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:56:03.832977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:56:03.835448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:56:03.837851 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:56:03.846633 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.846802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:56:03.854732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:56:03.857700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:56:03.858289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:56:03.858403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:03.859114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:56:03.859244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:56:03.861249 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:56:03.861367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:56:03.863050 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:56:03.864409 systemd[1]: Finished ensure-sysext.service. Jul 6 23:56:03.875330 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:56:03.876927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 6 23:56:03.877705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:56:03.877813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:56:03.880097 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:56:03.890623 augenrules[1399]: No rules Jul 6 23:56:03.889700 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:56:03.890237 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:56:03.890461 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:56:03.890868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:56:03.892978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:56:03.908777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:56:03.909976 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:56:03.967024 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:56:03.986060 systemd-networkd[1398]: lo: Link UP Jul 6 23:56:03.986345 systemd-networkd[1398]: lo: Gained carrier Jul 6 23:56:03.989379 systemd-networkd[1398]: Enumeration completed Jul 6 23:56:03.989508 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:56:03.994723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:56:03.998034 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:56:03.998038 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:56:03.998803 systemd-networkd[1398]: eth0: Link UP Jul 6 23:56:03.998807 systemd-networkd[1398]: eth0: Gained carrier Jul 6 23:56:03.998817 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:56:04.006995 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:56:04.007580 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:56:04.009040 systemd-resolved[1353]: Positive Trust Anchors: Jul 6 23:56:04.009058 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:56:04.009083 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:56:04.014419 systemd-resolved[1353]: Using system hostname 'ci-4081-3-4-2-e8b158d58b'. Jul 6 23:56:04.019378 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:56:04.020649 systemd[1]: Reached target network.target - Network. 
Jul 6 23:56:04.021065 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:56:04.039691 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:56:04.040177 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:56:04.040743 systemd-networkd[1398]: eth1: Link UP Jul 6 23:56:04.041405 systemd-networkd[1398]: eth1: Gained carrier Jul 6 23:56:04.041460 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:56:04.053092 systemd-networkd[1398]: eth0: DHCPv4 address 157.180.92.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:56:04.053565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 6 23:56:04.054941 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Jul 6 23:56:04.074560 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:56:04.077602 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:56:04.081570 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:56:04.082930 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 6 23:56:04.082988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:04.083064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:56:04.089707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:56:04.093753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:56:04.097121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:56:04.098658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:56:04.098693 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:56:04.098706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:56:04.098969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:56:04.099083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:56:04.102977 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:56:04.103793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:56:04.104884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:56:04.105363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:56:04.108396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:56:04.108720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
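Both NICs are matched by /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name", i.e. the catch-all profile matches whatever name the kernel assigned, and that name is not guaranteed stable across boots. Pinning the match to the hardware address in a higher-priority .network file avoids the ambiguity; a minimal sketch with a placeholder MAC:

    cat >/etc/systemd/network/00-eth0-pinned.network <<'EOF'
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff   # placeholder; substitute the NIC's real address

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload              # re-evaluate .network files without a reboot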
Jul 6 23:56:04.118042 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1392) Jul 6 23:56:04.122558 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 6 23:56:04.126234 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 6 23:56:04.126428 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 6 23:56:04.134609 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jul 6 23:56:04.141563 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:56:04.158662 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jul 6 23:56:04.165779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:56:04.176601 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jul 6 23:56:04.176748 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:56:04.186095 kernel: Console: switching to colour dummy device 80x25 Jul 6 23:56:04.187801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:56:04.188120 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:56:04.188140 kernel: [drm] features: -context_init Jul 6 23:56:04.193565 kernel: [drm] number of scanouts: 1 Jul 6 23:56:04.193605 kernel: [drm] number of cap sets: 0 Jul 6 23:56:04.194565 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 6 23:56:04.202755 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:56:04.207551 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 6 23:56:04.212575 kernel: Console: switching to colour frame buffer device 160x50 Jul 6 23:56:04.219571 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:56:04.221712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:56:04.221942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:56:04.232839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:56:04.276464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:56:05.208673 systemd-resolved[1353]: Clock change detected. Flushing caches. Jul 6 23:56:05.208830 systemd-timesyncd[1380]: Contacted time server 129.70.132.36:123 (0.flatcar.pool.ntp.org). Jul 6 23:56:05.208886 systemd-timesyncd[1380]: Initial clock synchronization to Sun 2025-07-06 23:56:05.208593 UTC. Jul 6 23:56:05.211338 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:56:05.216577 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:56:05.227888 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:56:05.255403 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:56:05.256180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:56:05.256280 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:56:05.256681 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:56:05.257636 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 6 23:56:05.257942 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:56:05.258090 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:56:05.258156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:56:05.258243 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:56:05.258271 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:56:05.258339 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:56:05.260825 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:56:05.262077 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:56:05.270097 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:56:05.271263 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:56:05.271756 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:56:05.271878 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:56:05.271932 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:56:05.272013 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:56:05.272053 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:56:05.274532 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:56:05.282624 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:56:05.283603 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:56:05.290603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:56:05.294565 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:56:05.305697 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:56:05.307341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:56:05.311221 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:56:05.318936 jq[1469]: false Jul 6 23:56:05.320535 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:56:05.325536 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jul 6 23:56:05.330546 extend-filesystems[1470]: Found loop4 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found loop5 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found loop6 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found loop7 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda1 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda2 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda3 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found usr Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda4 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda6 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda7 Jul 6 23:56:05.333930 extend-filesystems[1470]: Found sda9 Jul 6 23:56:05.333930 extend-filesystems[1470]: Checking size of /dev/sda9 Jul 6 23:56:05.335998 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:56:05.349927 dbus-daemon[1466]: [system] SELinux support is enabled Jul 6 23:56:05.360212 coreos-metadata[1465]: Jul 06 23:56:05.337 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 6 23:56:05.360212 coreos-metadata[1465]: Jul 06 23:56:05.339 INFO Fetch successful Jul 6 23:56:05.360212 coreos-metadata[1465]: Jul 06 23:56:05.339 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 6 23:56:05.360212 coreos-metadata[1465]: Jul 06 23:56:05.342 INFO Fetch successful Jul 6 23:56:05.349545 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:56:05.368860 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:56:05.371243 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:56:05.373002 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:56:05.375695 extend-filesystems[1470]: Resized partition /dev/sda9 Jul 6 23:56:05.379719 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:56:05.390329 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:56:05.390586 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:56:05.391796 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:56:05.395352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:56:05.406517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:56:05.406678 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:56:05.406946 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:56:05.407069 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:56:05.418483 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 6 23:56:05.411661 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:56:05.411784 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:56:05.421926 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
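The coreos-metadata fetches above go against Hetzner's link-local metadata service; the same two endpoints it logs can be queried by hand from inside the instance:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks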
Jul 6 23:56:05.427310 jq[1495]: true Jul 6 23:56:05.421953 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:56:05.424614 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:56:05.424627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:56:05.435088 update_engine[1492]: I20250706 23:56:05.429170 1492 main.cc:92] Flatcar Update Engine starting Jul 6 23:56:05.440945 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:56:05.442194 update_engine[1492]: I20250706 23:56:05.441016 1492 update_check_scheduler.cc:74] Next update check in 11m10s Jul 6 23:56:05.456747 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1406) Jul 6 23:56:05.448588 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:56:05.461519 jq[1507]: true Jul 6 23:56:05.464360 systemd-logind[1486]: New seat seat0. Jul 6 23:56:05.469012 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button) Jul 6 23:56:05.469048 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:56:05.469592 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:56:05.471653 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:56:05.478162 tar[1499]: linux-amd64/LICENSE Jul 6 23:56:05.483513 tar[1499]: linux-amd64/helm Jul 6 23:56:05.560814 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:56:05.564165 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:56:05.566873 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:56:05.570135 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:56:05.581218 systemd[1]: Starting sshkeys.service... Jul 6 23:56:05.591509 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 6 23:56:05.606913 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 6 23:56:05.606913 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 6 23:56:05.606913 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 6 23:56:05.610043 extend-filesystems[1470]: Resized filesystem in /dev/sda9 Jul 6 23:56:05.610043 extend-filesystems[1470]: Found sr0 Jul 6 23:56:05.609279 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:56:05.611112 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:56:05.622797 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:56:05.634604 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
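The extend-filesystems/resize2fs exchange in this stretch grows the mounted root from 1617920 to 9393147 4 KiB blocks, i.e. from roughly 6.2 GiB to 35.8 GiB, entirely on-line. Done manually, the equivalent would be approximately the following (a sketch assuming cloud-utils' growpart is available; the service uses its own partition-growing logic):

    growpart /dev/sda 9   # extend partition 9 into the free space, if not already done
    resize2fs /dev/sda9   # on-line ext4 resize of the mounted root, matching the log above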
Jul 6 23:56:05.696997 coreos-metadata[1541]: Jul 06 23:56:05.694 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 6 23:56:05.696997 coreos-metadata[1541]: Jul 06 23:56:05.696 INFO Fetch successful Jul 6 23:56:05.701757 unknown[1541]: wrote ssh authorized keys file for user: core Jul 6 23:56:05.739202 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:56:05.740058 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:56:05.745557 systemd[1]: Finished sshkeys.service. Jul 6 23:56:05.792697 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:56:05.808698 containerd[1511]: time="2025-07-06T23:56:05.808618979Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:56:05.857505 containerd[1511]: time="2025-07-06T23:56:05.857442909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859101 containerd[1511]: time="2025-07-06T23:56:05.858885555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859101 containerd[1511]: time="2025-07-06T23:56:05.858922985Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:56:05.859101 containerd[1511]: time="2025-07-06T23:56:05.858937523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:56:05.859101 containerd[1511]: time="2025-07-06T23:56:05.859088515Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:56:05.859202 containerd[1511]: time="2025-07-06T23:56:05.859107882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859202 containerd[1511]: time="2025-07-06T23:56:05.859160040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859202 containerd[1511]: time="2025-07-06T23:56:05.859170640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859532 containerd[1511]: time="2025-07-06T23:56:05.859323046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859532 containerd[1511]: time="2025-07-06T23:56:05.859341430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859532 containerd[1511]: time="2025-07-06T23:56:05.859352330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859532 containerd[1511]: time="2025-07-06T23:56:05.859360305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859532 containerd[1511]: time="2025-07-06T23:56:05.859436899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859618 containerd[1511]: time="2025-07-06T23:56:05.859605365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859705 containerd[1511]: time="2025-07-06T23:56:05.859681879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:56:05.859705 containerd[1511]: time="2025-07-06T23:56:05.859701665Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:56:05.859781 containerd[1511]: time="2025-07-06T23:56:05.859761027Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:56:05.859923 containerd[1511]: time="2025-07-06T23:56:05.859807524Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:56:05.864745 containerd[1511]: time="2025-07-06T23:56:05.864714818Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:56:05.864783 containerd[1511]: time="2025-07-06T23:56:05.864758269Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:56:05.864783 containerd[1511]: time="2025-07-06T23:56:05.864775501Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:56:05.864812 containerd[1511]: time="2025-07-06T23:56:05.864787985Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:56:05.864812 containerd[1511]: time="2025-07-06T23:56:05.864799466Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.864900476Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865085383Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865167537Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865181703Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865192323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865202733Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865213042Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865222430Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865238460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865252817Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865262866Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865273385Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865283074Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:56:05.865362 containerd[1511]: time="2025-07-06T23:56:05.865302099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865312319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865321696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865330793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865340532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865350350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865360970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865371980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865382630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865394823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865430149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865442533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865454034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865466628Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865483209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865599 containerd[1511]: time="2025-07-06T23:56:05.865491554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865503867Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865537150Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865549604Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865558380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865566585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865574460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865583977Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865591683Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:56:05.865803 containerd[1511]: time="2025-07-06T23:56:05.865599297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:56:05.865925 containerd[1511]: time="2025-07-06T23:56:05.865817586Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:56:05.865925 containerd[1511]: time="2025-07-06T23:56:05.865866948Z" level=info msg="Connect containerd service" Jul 6 23:56:05.865925 containerd[1511]: time="2025-07-06T23:56:05.865899550Z" level=info msg="using legacy CRI server" Jul 6 23:56:05.865925 containerd[1511]: time="2025-07-06T23:56:05.865905361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:56:05.866124 containerd[1511]: time="2025-07-06T23:56:05.865977907Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869600611Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:56:05.870152 
containerd[1511]: time="2025-07-06T23:56:05.869716518Z" level=info msg="Start subscribing containerd event" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869756323Z" level=info msg="Start recovering state" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869802510Z" level=info msg="Start event monitor" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869816807Z" level=info msg="Start snapshots syncer" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869823769Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:56:05.870152 containerd[1511]: time="2025-07-06T23:56:05.869829911Z" level=info msg="Start streaming server" Jul 6 23:56:05.870286 containerd[1511]: time="2025-07-06T23:56:05.870186239Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:56:05.870286 containerd[1511]: time="2025-07-06T23:56:05.870241373Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:56:05.886265 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:56:05.891132 containerd[1511]: time="2025-07-06T23:56:05.890697699Z" level=info msg="containerd successfully booted in 0.083888s" Jul 6 23:56:06.069330 tar[1499]: linux-amd64/README.md Jul 6 23:56:06.079592 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:56:06.173513 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:56:06.190641 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:56:06.196647 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:56:06.202576 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:56:06.202714 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:56:06.208683 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:56:06.219340 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:56:06.224702 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:56:06.226635 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:56:06.228680 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:56:06.452702 systemd-networkd[1398]: eth0: Gained IPv6LL Jul 6 23:56:06.457455 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:56:06.460774 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:56:06.472654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:06.480827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:56:06.509683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:56:06.773068 systemd-networkd[1398]: eth1: Gained IPv6LL Jul 6 23:56:07.283777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:07.284943 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:56:07.287724 systemd[1]: Startup finished in 1.132s (kernel) + 7.457s (initrd) + 4.331s (userspace) = 12.921s. 
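containerd boots successfully but records that no CNI network config exists in /etc/cni/net.d, so the CRI plugin cannot set up pod networking until one appears (normally installed later by a CNI addon). For reference, the kind of file the loader is looking for resembles this minimal bridge config; the name, subnet, and plugin choice are illustrative, and the bridge/host-local binaries would need to exist under the logged /opt/cni/bin:

    cat >/etc/cni/net.d/10-bridge.conf <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
    EOF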
Jul 6 23:56:07.298952 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:07.822357 kubelet[1597]: E0706 23:56:07.822301 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:07.824702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:07.824824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:18.075513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:56:18.080799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:18.170366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:18.173275 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:18.212057 kubelet[1616]: E0706 23:56:18.212010 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:18.215331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:18.215507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:28.465974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:56:28.471806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:28.556344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:28.558971 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:28.590124 kubelet[1630]: E0706 23:56:28.590065 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:28.592576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:28.592692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:38.843198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:56:38.848640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:38.943805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
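This kubelet failure is the first of a long series below: the unit is enabled before the node is provisioned, so every start dies immediately because /var/lib/kubelet/config.yaml does not exist yet. That file is normally written during cluster bootstrap (with kubeadm, by kubeadm init on control-plane nodes or kubeadm join on workers), after which the restart loop resolves itself. A quick check of the node's state (sketch):

    test -f /var/lib/kubelet/config.yaml && echo bootstrapped || echo "not yet provisioned"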
Jul 6 23:56:38.946817 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:38.981254 kubelet[1645]: E0706 23:56:38.981158 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:38.982537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:38.982675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:49.097624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:56:49.105628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:49.197053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:49.200093 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:49.229187 kubelet[1660]: E0706 23:56:49.229094 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:49.230487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:49.230697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:56:50.405226 update_engine[1492]: I20250706 23:56:50.405128 1492 update_attempter.cc:509] Updating boot flags... Jul 6 23:56:50.439438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1676) Jul 6 23:56:50.482758 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1677) Jul 6 23:56:59.347376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 6 23:56:59.352579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:59.464636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:59.465019 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:56:59.511837 kubelet[1693]: E0706 23:56:59.511763 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:56:59.515009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:56:59.515169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:09.597497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 6 23:57:09.602582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:09.690202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:57:09.693959 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:09.729667 kubelet[1708]: E0706 23:57:09.729391 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:09.732246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:09.732434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:19.847684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 6 23:57:19.852604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:19.945795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:19.948819 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:19.980878 kubelet[1724]: E0706 23:57:19.980817 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:19.983326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:19.983498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:30.097896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 6 23:57:30.104684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:30.204119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:30.215720 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:30.251093 kubelet[1739]: E0706 23:57:30.251043 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:30.253757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:30.253964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:40.347402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 6 23:57:40.352897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:40.437974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:57:40.440831 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:40.471867 kubelet[1755]: E0706 23:57:40.471750 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:40.473835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:40.474008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:50.597524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jul 6 23:57:50.602807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:50.692080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:50.694794 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:57:50.721441 kubelet[1770]: E0706 23:57:50.721374 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:57:50.723607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:57:50.723725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:57:54.620316 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:57:54.621314 systemd[1]: Started sshd@0-157.180.92.196:22-147.75.109.163:59238.service - OpenSSH per-connection server daemon (147.75.109.163:59238). Jul 6 23:57:55.627561 sshd[1778]: Accepted publickey for core from 147.75.109.163 port 59238 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:57:55.629374 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:55.637108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:57:55.648607 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:57:55.651604 systemd-logind[1486]: New session 1 of user core. Jul 6 23:57:55.658379 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:57:55.663641 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:57:55.667039 (systemd)[1782]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:57:55.756368 systemd[1782]: Queued start job for default target default.target. Jul 6 23:57:55.766209 systemd[1782]: Created slice app.slice - User Application Slice. Jul 6 23:57:55.766231 systemd[1782]: Reached target paths.target - Paths. Jul 6 23:57:55.766242 systemd[1782]: Reached target timers.target - Timers. Jul 6 23:57:55.767192 systemd[1782]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:57:55.776980 systemd[1782]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:57:55.777024 systemd[1782]: Reached target sockets.target - Sockets. Jul 6 23:57:55.777035 systemd[1782]: Reached target basic.target - Basic System. 
Jul 6 23:57:55.777179 systemd[1782]: Reached target default.target - Main User Target. Jul 6 23:57:55.777203 systemd[1782]: Startup finished in 105ms. Jul 6 23:57:55.777260 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:57:55.778285 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:57:56.499651 systemd[1]: Started sshd@1-157.180.92.196:22-147.75.109.163:51150.service - OpenSSH per-connection server daemon (147.75.109.163:51150). Jul 6 23:57:57.496965 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 51150 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:57:57.498485 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:57.503113 systemd-logind[1486]: New session 2 of user core. Jul 6 23:57:57.508595 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:57:58.190716 sshd[1793]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:58.194311 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:57:58.194763 systemd[1]: sshd@1-157.180.92.196:22-147.75.109.163:51150.service: Deactivated successfully. Jul 6 23:57:58.196439 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:57:58.197331 systemd-logind[1486]: Removed session 2. Jul 6 23:57:58.358982 systemd[1]: Started sshd@2-157.180.92.196:22-147.75.109.163:51158.service - OpenSSH per-connection server daemon (147.75.109.163:51158). Jul 6 23:57:59.356825 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 51158 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:57:59.358154 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:59.362588 systemd-logind[1486]: New session 3 of user core. Jul 6 23:57:59.368568 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:58:00.045003 sshd[1800]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:00.047893 systemd[1]: sshd@2-157.180.92.196:22-147.75.109.163:51158.service: Deactivated successfully. Jul 6 23:58:00.049272 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:58:00.050287 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:58:00.051239 systemd-logind[1486]: Removed session 3. Jul 6 23:58:00.221015 systemd[1]: Started sshd@3-157.180.92.196:22-147.75.109.163:51168.service - OpenSSH per-connection server daemon (147.75.109.163:51168). Jul 6 23:58:00.847461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jul 6 23:58:00.852984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:58:00.943376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:58:00.946596 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:58:00.976937 kubelet[1817]: E0706 23:58:00.976871 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:58:00.979049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:58:00.979217 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 6 23:58:01.235663 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 51168 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:58:01.236706 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:01.241098 systemd-logind[1486]: New session 4 of user core. Jul 6 23:58:01.251583 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:58:01.939534 sshd[1807]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:01.943880 systemd[1]: sshd@3-157.180.92.196:22-147.75.109.163:51168.service: Deactivated successfully. Jul 6 23:58:01.946306 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:58:01.947173 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:58:01.948402 systemd-logind[1486]: Removed session 4. Jul 6 23:58:02.109948 systemd[1]: Started sshd@4-157.180.92.196:22-147.75.109.163:51184.service - OpenSSH per-connection server daemon (147.75.109.163:51184). Jul 6 23:58:03.091052 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 51184 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:58:03.092300 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:03.096539 systemd-logind[1486]: New session 5 of user core. Jul 6 23:58:03.102597 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:58:03.624314 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:58:03.624737 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:03.645542 sudo[1832]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:03.806305 sshd[1829]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:03.809945 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:58:03.810598 systemd[1]: sshd@4-157.180.92.196:22-147.75.109.163:51184.service: Deactivated successfully. Jul 6 23:58:03.812277 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:58:03.813187 systemd-logind[1486]: Removed session 5. Jul 6 23:58:03.974203 systemd[1]: Started sshd@5-157.180.92.196:22-147.75.109.163:51190.service - OpenSSH per-connection server daemon (147.75.109.163:51190). Jul 6 23:58:04.957111 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 51190 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:58:04.958791 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:04.963682 systemd-logind[1486]: New session 6 of user core. Jul 6 23:58:04.976587 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:58:05.482529 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:58:05.482863 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:05.486094 sudo[1841]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:05.490741 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:58:05.491006 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:05.504683 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
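The audit-rules restart that this sudo invocation triggers (it completes at the start of the next stretch) amounts to flushing the in-kernel ruleset and recompiling the rules directory, roughly:

    auditctl -D        # delete loaded rules; prints "No rules" when the set was already empty
    augenrules --load  # concatenate /etc/audit/rules.d/*.rules and load the combined file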
Jul 6 23:58:05.505916 auditctl[1844]: No rules Jul 6 23:58:05.506518 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:58:05.506689 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:58:05.509030 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:58:05.534051 augenrules[1862]: No rules Jul 6 23:58:05.534675 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:58:05.535895 sudo[1840]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:05.696135 sshd[1837]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:05.699196 systemd[1]: sshd@5-157.180.92.196:22-147.75.109.163:51190.service: Deactivated successfully. Jul 6 23:58:05.701045 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:58:05.702201 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:58:05.703255 systemd-logind[1486]: Removed session 6. Jul 6 23:58:05.870889 systemd[1]: Started sshd@6-157.180.92.196:22-147.75.109.163:51206.service - OpenSSH per-connection server daemon (147.75.109.163:51206). Jul 6 23:58:06.869177 sshd[1870]: Accepted publickey for core from 147.75.109.163 port 51206 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU Jul 6 23:58:06.870679 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:06.875782 systemd-logind[1486]: New session 7 of user core. Jul 6 23:58:06.881589 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:58:07.400404 sudo[1873]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:58:07.400702 sudo[1873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:58:07.646724 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:58:07.648913 (dockerd)[1889]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:58:07.867235 dockerd[1889]: time="2025-07-06T23:58:07.867186525Z" level=info msg="Starting up" Jul 6 23:58:07.960257 dockerd[1889]: time="2025-07-06T23:58:07.959822815Z" level=info msg="Loading containers: start." Jul 6 23:58:08.037450 kernel: Initializing XFRM netlink socket Jul 6 23:58:08.099809 systemd-networkd[1398]: docker0: Link UP Jul 6 23:58:08.110468 dockerd[1889]: time="2025-07-06T23:58:08.110403339Z" level=info msg="Loading containers: done." Jul 6 23:58:08.126650 dockerd[1889]: time="2025-07-06T23:58:08.126334758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:58:08.126650 dockerd[1889]: time="2025-07-06T23:58:08.126444516Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:58:08.126650 dockerd[1889]: time="2025-07-06T23:58:08.126518465Z" level=info msg="Daemon has completed initialization" Jul 6 23:58:08.149482 dockerd[1889]: time="2025-07-06T23:58:08.149427964Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:58:08.149556 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 6 23:58:08.881504 containerd[1511]: time="2025-07-06T23:58:08.881459543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 6 23:58:09.447972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458432069.mount: Deactivated successfully.
Jul 6 23:58:10.494640 containerd[1511]: time="2025-07-06T23:58:10.494590099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:10.495814 containerd[1511]: time="2025-07-06T23:58:10.495776640Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079193"
Jul 6 23:58:10.496606 containerd[1511]: time="2025-07-06T23:58:10.496571250Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:10.501395 containerd[1511]: time="2025-07-06T23:58:10.500965486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:10.502300 containerd[1511]: time="2025-07-06T23:58:10.501954613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.620448013s"
Jul 6 23:58:10.502300 containerd[1511]: time="2025-07-06T23:58:10.502014146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 6 23:58:10.502932 containerd[1511]: time="2025-07-06T23:58:10.502903365Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 6 23:58:11.097521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jul 6 23:58:11.103633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:58:11.198995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:58:11.203223 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:58:11.239601 kubelet[2088]: E0706 23:58:11.239526 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:58:11.242751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:58:11.243048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:58:11.666333 containerd[1511]: time="2025-07-06T23:58:11.666270658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:11.667113 containerd[1511]: time="2025-07-06T23:58:11.667071370Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018968"
Jul 6 23:58:11.667677 containerd[1511]: time="2025-07-06T23:58:11.667640314Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:11.670533 containerd[1511]: time="2025-07-06T23:58:11.669617566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:11.670533 containerd[1511]: time="2025-07-06T23:58:11.670392408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.167397421s"
Jul 6 23:58:11.670533 containerd[1511]: time="2025-07-06T23:58:11.670434148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 6 23:58:11.671071 containerd[1511]: time="2025-07-06T23:58:11.671050832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 6 23:58:12.639950 containerd[1511]: time="2025-07-06T23:58:12.639880397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:12.640663 containerd[1511]: time="2025-07-06T23:58:12.640632507Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155077"
Jul 6 23:58:12.641426 containerd[1511]: time="2025-07-06T23:58:12.641373395Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:12.643565 containerd[1511]: time="2025-07-06T23:58:12.643533092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:12.644736 containerd[1511]: time="2025-07-06T23:58:12.644234465Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 973.16047ms"
Jul 6 23:58:12.644736 containerd[1511]: time="2025-07-06T23:58:12.644259983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 6 23:58:12.644736 containerd[1511]: time="2025-07-06T23:58:12.644600044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 6 23:58:13.561684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542278676.mount: Deactivated successfully.
Jul 6 23:58:13.865446 containerd[1511]: time="2025-07-06T23:58:13.865378357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:13.866280 containerd[1511]: time="2025-07-06T23:58:13.866248588Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892774"
Jul 6 23:58:13.867228 containerd[1511]: time="2025-07-06T23:58:13.867191348Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:13.868755 containerd[1511]: time="2025-07-06T23:58:13.868721105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:13.869260 containerd[1511]: time="2025-07-06T23:58:13.869120789Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.224501107s"
Jul 6 23:58:13.869260 containerd[1511]: time="2025-07-06T23:58:13.869146568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 6 23:58:13.869735 containerd[1511]: time="2025-07-06T23:58:13.869712164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 6 23:58:14.351098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644413809.mount: Deactivated successfully.
Jul 6 23:58:15.102981 containerd[1511]: time="2025-07-06T23:58:15.102924042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.103989 containerd[1511]: time="2025-07-06T23:58:15.103946252Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Jul 6 23:58:15.104656 containerd[1511]: time="2025-07-06T23:58:15.104616646Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.107132 containerd[1511]: time="2025-07-06T23:58:15.107099511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.108145 containerd[1511]: time="2025-07-06T23:58:15.107996823Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.238260302s"
Jul 6 23:58:15.108145 containerd[1511]: time="2025-07-06T23:58:15.108025317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 6 23:58:15.108394 containerd[1511]: time="2025-07-06T23:58:15.108380808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:58:15.528941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889181857.mount: Deactivated successfully.
Jul 6 23:58:15.534944 containerd[1511]: time="2025-07-06T23:58:15.534911018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.535785 containerd[1511]: time="2025-07-06T23:58:15.535736455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Jul 6 23:58:15.536514 containerd[1511]: time="2025-07-06T23:58:15.536467806Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.538518 containerd[1511]: time="2025-07-06T23:58:15.538495130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:15.539695 containerd[1511]: time="2025-07-06T23:58:15.539249403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 430.283962ms"
Jul 6 23:58:15.539695 containerd[1511]: time="2025-07-06T23:58:15.539280162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 6 23:58:15.539852 containerd[1511]: time="2025-07-06T23:58:15.539819068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 6 23:58:15.972686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722863101.mount: Deactivated successfully.
Jul 6 23:58:18.295520 containerd[1511]: time="2025-07-06T23:58:18.295471565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:18.296689 containerd[1511]: time="2025-07-06T23:58:18.296647703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247215"
Jul 6 23:58:18.297435 containerd[1511]: time="2025-07-06T23:58:18.297113762Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:18.300185 containerd[1511]: time="2025-07-06T23:58:18.300150167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:18.301534 containerd[1511]: time="2025-07-06T23:58:18.301378875Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.761526845s"
Jul 6 23:58:18.301534 containerd[1511]: time="2025-07-06T23:58:18.301433138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 6 23:58:20.913564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
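The pull sequence above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) runs through containerd; each "Pulled image ... in Ns" line reports the resolved digest and size. A rough, hedged equivalent using the containerd Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images are stored under the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// WithPullUnpack resolves, downloads, and unpacks into the snapshotter,
    	// which is what produces the ImageCreate events seen in the log.
    	img, err := client.Pull(ctx, "registry.k8s.io/etcd:3.5.21-0", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }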
Jul 6 23:58:20.918708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:58:20.944948 systemd[1]: Reloading requested from client PID 2251 ('systemctl') (unit session-7.scope)...
Jul 6 23:58:20.945067 systemd[1]: Reloading...
Jul 6 23:58:21.037434 zram_generator::config[2289]: No configuration found.
Jul 6 23:58:21.121367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:58:21.179535 systemd[1]: Reloading finished in 234 ms.
Jul 6 23:58:21.216840 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:58:21.216913 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:58:21.217104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:58:21.224813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:58:21.312008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:58:21.314730 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:58:21.350777 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:58:21.350777 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:58:21.350777 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
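The three deprecation warnings all point at the same migration: flags such as --container-runtime-endpoint and --volume-plugin-dir are meant to move into the KubeletConfiguration file named by --config. A toy Go illustration of the idea, using a simplified stand-in struct (not the real k8s.io/kubelet types) and gopkg.in/yaml.v3; the endpoint and plugin-dir values match what this node is evidently using:

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    // Simplified stand-in for KubeletConfiguration; the keys mirror the
    // camelCase field names the real config file uses.
    type kubeletConfig struct {
    	Kind                     string `yaml:"kind"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	VolumePluginDir          string `yaml:"volumePluginDir"`
    }

    func main() {
    	// The same settings the deprecated flags carried, expressed as config.
    	doc := `
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    `
    	var cfg kubeletConfig
    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("runtime endpoint: %s\n", cfg.ContainerRuntimeEndpoint)
    }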
Jul 6 23:58:21.351118 kubelet[2344]: I0706 23:58:21.350813 2344 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:58:21.847296 kubelet[2344]: I0706 23:58:21.847252 2344 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 6 23:58:21.847296 kubelet[2344]: I0706 23:58:21.847282 2344 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:58:21.848582 kubelet[2344]: I0706 23:58:21.848565 2344 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 6 23:58:21.876622 kubelet[2344]: I0706 23:58:21.876596 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:58:21.878945 kubelet[2344]: E0706 23:58:21.878869 2344 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://157.180.92.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 6 23:58:21.890958 kubelet[2344]: E0706 23:58:21.890852 2344 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:58:21.890958 kubelet[2344]: I0706 23:58:21.890942 2344 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:58:21.897676 kubelet[2344]: I0706 23:58:21.897645 2344 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:58:21.901235 kubelet[2344]: I0706 23:58:21.901186 2344 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:58:21.904391 kubelet[2344]: I0706 23:58:21.901231 2344 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-2-e8b158d58b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:58:21.904391 kubelet[2344]: I0706 23:58:21.904381 2344 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:58:21.904542 kubelet[2344]: I0706 23:58:21.904396 2344 container_manager_linux.go:303] "Creating device plugin manager"
Jul 6 23:58:21.905322 kubelet[2344]: I0706 23:58:21.905286 2344 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:58:21.910912 kubelet[2344]: I0706 23:58:21.910490 2344 kubelet.go:480] "Attempting to sync node with API server"
Jul 6 23:58:21.910912 kubelet[2344]: I0706 23:58:21.910544 2344 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:58:21.910912 kubelet[2344]: I0706 23:58:21.910577 2344 kubelet.go:386] "Adding apiserver pod source"
Jul 6 23:58:21.910912 kubelet[2344]: I0706 23:58:21.910596 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:58:21.921115 kubelet[2344]: E0706 23:58:21.921064 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.92.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 6 23:58:21.921239 kubelet[2344]: E0706 23:58:21.921190 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.92.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-2-e8b158d58b&limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 6 23:58:21.922145 kubelet[2344]: I0706 23:58:21.922082 2344 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 6 23:58:21.922758 kubelet[2344]: I0706 23:58:21.922675 2344 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 6 23:58:21.923656 kubelet[2344]: W0706 23:58:21.923626 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:58:21.930147 kubelet[2344]: I0706 23:58:21.930112 2344 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 6 23:58:21.930361 kubelet[2344]: I0706 23:58:21.930350 2344 server.go:1289] "Started kubelet"
Jul 6 23:58:21.933362 kubelet[2344]: I0706 23:58:21.933305 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:58:21.935331 kubelet[2344]: I0706 23:58:21.935290 2344 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:58:21.938260 kubelet[2344]: I0706 23:58:21.938162 2344 server.go:317] "Adding debug handlers to kubelet server"
Jul 6 23:58:21.941431 kubelet[2344]: I0706 23:58:21.941006 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:58:21.942367 kubelet[2344]: I0706 23:58:21.942355 2344 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 6 23:58:21.942547 kubelet[2344]: E0706 23:58:21.942530 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-2-e8b158d58b\" not found"
Jul 6 23:58:21.944807 kubelet[2344]: I0706 23:58:21.942866 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:58:21.945075 kubelet[2344]: I0706 23:58:21.945060 2344 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:58:21.946739 kubelet[2344]: I0706 23:58:21.944382 2344 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:58:21.946818 kubelet[2344]: I0706 23:58:21.944308 2344 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 6 23:58:21.949451 kubelet[2344]: I0706 23:58:21.948969 2344 factory.go:223] Registration of the systemd container factory successfully
Jul 6 23:58:21.949451 kubelet[2344]: I0706 23:58:21.949333 2344 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:58:21.949661 kubelet[2344]: E0706 23:58:21.949070 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.92.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 6 23:58:21.950118 kubelet[2344]: E0706 23:58:21.946588 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.92.196:6443/api/v1/namespaces/default/events\": dial tcp 157.180.92.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-2-e8b158d58b.184fceee0fa23611 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-2-e8b158d58b,UID:ci-4081-3-4-2-e8b158d58b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-2-e8b158d58b,},FirstTimestamp:2025-07-06 23:58:21.930239505 +0000 UTC m=+0.612561103,LastTimestamp:2025-07-06 23:58:21.930239505 +0000 UTC m=+0.612561103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-2-e8b158d58b,}"
Jul 6 23:58:21.950118 kubelet[2344]: E0706 23:58:21.949132 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.92.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-2-e8b158d58b?timeout=10s\": dial tcp 157.180.92.196:6443: connect: connection refused" interval="200ms"
Jul 6 23:58:21.954459 kubelet[2344]: I0706 23:58:21.954358 2344 factory.go:223] Registration of the containerd container factory successfully
Jul 6 23:58:21.955241 kubelet[2344]: E0706 23:58:21.955227 2344 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:58:21.955585 kubelet[2344]: I0706 23:58:21.955567 2344 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:58:21.971183 kubelet[2344]: I0706 23:58:21.971170 2344 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 6 23:58:21.971251 kubelet[2344]: I0706 23:58:21.971243 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 6 23:58:21.971553 kubelet[2344]: I0706 23:58:21.971361 2344 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:58:21.973080 kubelet[2344]: I0706 23:58:21.973066 2344 policy_none.go:49] "None policy: Start"
Jul 6 23:58:21.973159 kubelet[2344]: I0706 23:58:21.973148 2344 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 6 23:58:21.973226 kubelet[2344]: I0706 23:58:21.973215 2344 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:58:21.979385 kubelet[2344]: I0706 23:58:21.978855 2344 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:58:21.979385 kubelet[2344]: I0706 23:58:21.978878 2344 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 6 23:58:21.979385 kubelet[2344]: I0706 23:58:21.978908 2344 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 6 23:58:21.979385 kubelet[2344]: I0706 23:58:21.978917 2344 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 6 23:58:21.979385 kubelet[2344]: E0706 23:58:21.978949 2344 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:58:21.979385 kubelet[2344]: E0706 23:58:21.979326 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.92.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 6 23:58:21.984159 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
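Note the lease controller's retry interval across this stretch of the log: 200ms here, then 400ms, 800ms, and finally 1.6s, i.e. a doubling backoff while the API server on 157.180.92.196:6443 still refuses connections. A minimal Go sketch of that pattern, with a plain TCP probe standing in for the lease request (the real controller also caps the backoff rather than doubling forever):

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	addr := "157.180.92.196:6443" // API server endpoint from the log
    	interval := 200 * time.Millisecond

    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			log.Println("API server reachable")
    			return
    		}
    		log.Printf("will retry: %v interval=%s", err, interval)
    		time.Sleep(interval)
    		// Double the interval on each failure, matching the observed
    		// 200ms -> 400ms -> 800ms -> 1.6s progression.
    		interval *= 2
    	}
    }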
Jul 6 23:58:21.998405 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:58:22.008071 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:58:22.009158 kubelet[2344]: E0706 23:58:22.009129 2344 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 6 23:58:22.009281 kubelet[2344]: I0706 23:58:22.009262 2344 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:58:22.009308 kubelet[2344]: I0706 23:58:22.009275 2344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:58:22.009915 kubelet[2344]: I0706 23:58:22.009811 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:58:22.010602 kubelet[2344]: E0706 23:58:22.010574 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 6 23:58:22.010643 kubelet[2344]: E0706 23:58:22.010626 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-2-e8b158d58b\" not found"
Jul 6 23:58:22.100655 systemd[1]: Created slice kubepods-burstable-pod629dcf05c4e3e02db23291e7e5156339.slice - libcontainer container kubepods-burstable-pod629dcf05c4e3e02db23291e7e5156339.slice.
Jul 6 23:58:22.108232 kubelet[2344]: E0706 23:58:22.108181 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.111465 kubelet[2344]: I0706 23:58:22.111292 2344 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.113003 systemd[1]: Created slice kubepods-burstable-pod639a0feb47aa05e7f913847502174496.slice - libcontainer container kubepods-burstable-pod639a0feb47aa05e7f913847502174496.slice.
Jul 6 23:58:22.113328 kubelet[2344]: E0706 23:58:22.113282 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.92.196:6443/api/v1/nodes\": dial tcp 157.180.92.196:6443: connect: connection refused" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.115120 kubelet[2344]: E0706 23:58:22.115105 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.125380 systemd[1]: Created slice kubepods-burstable-pod0b2fec5f19c01dba89a5ec7821035218.slice - libcontainer container kubepods-burstable-pod0b2fec5f19c01dba89a5ec7821035218.slice.
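The eviction manager whose control loop just started enforces the hard thresholds from the node config logged above: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inodesFree < 5%. A quick Go sketch of how a percentage signal evaluates, using hypothetical numbers that are not taken from this node:

    package main

    import "fmt"

    func main() {
    	// Hard eviction triggers when the observed signal drops below the
    	// configured threshold. Example signal: nodefs.available < 10%.
    	const thresholdPct = 0.10
    	capacityBytes := int64(40 << 30) // assume a 40 GiB root filesystem
    	availableBytes := int64(3 << 30) // assume 3 GiB currently free

    	threshold := float64(capacityBytes) * thresholdPct
    	if float64(availableBytes) < threshold {
    		fmt.Printf("evict: %d bytes available < %.0f byte threshold\n", availableBytes, threshold)
    	} else {
    		fmt.Println("no eviction pressure")
    	}
    }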
Jul 6 23:58:22.126525 kubelet[2344]: E0706 23:58:22.126505 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.150986 kubelet[2344]: E0706 23:58:22.150926 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.92.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-2-e8b158d58b?timeout=10s\": dial tcp 157.180.92.196:6443: connect: connection refused" interval="400ms"
Jul 6 23:58:22.248792 kubelet[2344]: I0706 23:58:22.248710 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.248792 kubelet[2344]: I0706 23:58:22.248770 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249360 kubelet[2344]: I0706 23:58:22.248823 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249360 kubelet[2344]: I0706 23:58:22.248850 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249360 kubelet[2344]: I0706 23:58:22.248877 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b2fec5f19c01dba89a5ec7821035218-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-2-e8b158d58b\" (UID: \"0b2fec5f19c01dba89a5ec7821035218\") " pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249360 kubelet[2344]: I0706 23:58:22.248992 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249360 kubelet[2344]: I0706 23:58:22.249032 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249618 kubelet[2344]: I0706 23:58:22.249061 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.249618 kubelet[2344]: I0706 23:58:22.249131 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.315530 kubelet[2344]: I0706 23:58:22.315497 2344 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.315898 kubelet[2344]: E0706 23:58:22.315837 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.92.196:6443/api/v1/nodes\": dial tcp 157.180.92.196:6443: connect: connection refused" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.410880 containerd[1511]: time="2025-07-06T23:58:22.410668052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-2-e8b158d58b,Uid:629dcf05c4e3e02db23291e7e5156339,Namespace:kube-system,Attempt:0,}"
Jul 6 23:58:22.424439 containerd[1511]: time="2025-07-06T23:58:22.423156872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-2-e8b158d58b,Uid:639a0feb47aa05e7f913847502174496,Namespace:kube-system,Attempt:0,}"
Jul 6 23:58:22.427856 containerd[1511]: time="2025-07-06T23:58:22.427758848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-2-e8b158d58b,Uid:0b2fec5f19c01dba89a5ec7821035218,Namespace:kube-system,Attempt:0,}"
Jul 6 23:58:22.552270 kubelet[2344]: E0706 23:58:22.552169 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.92.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-2-e8b158d58b?timeout=10s\": dial tcp 157.180.92.196:6443: connect: connection refused" interval="800ms"
Jul 6 23:58:22.718581 kubelet[2344]: I0706 23:58:22.718371 2344 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.718993 kubelet[2344]: E0706 23:58:22.718829 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.92.196:6443/api/v1/nodes\": dial tcp 157.180.92.196:6443: connect: connection refused" node="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:58:22.891834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269299962.mount: Deactivated successfully.
Jul 6 23:58:22.901088 containerd[1511]: time="2025-07-06T23:58:22.901023755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:58:22.903098 containerd[1511]: time="2025-07-06T23:58:22.903000541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jul 6 23:58:22.904968 containerd[1511]: time="2025-07-06T23:58:22.904904901Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:58:22.906206 containerd[1511]: time="2025-07-06T23:58:22.906053827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:58:22.907219 containerd[1511]: time="2025-07-06T23:58:22.907170722Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:58:22.907840 containerd[1511]: time="2025-07-06T23:58:22.907767497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:58:22.908462 containerd[1511]: time="2025-07-06T23:58:22.908064416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:58:22.911448 containerd[1511]: time="2025-07-06T23:58:22.911371441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:58:22.912535 containerd[1511]: time="2025-07-06T23:58:22.912294541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.288679ms" Jul 6 23:58:22.913973 containerd[1511]: time="2025-07-06T23:58:22.913725949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.949172ms" Jul 6 23:58:22.915660 containerd[1511]: time="2025-07-06T23:58:22.915619348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.341878ms" Jul 6 23:58:23.025082 containerd[1511]: time="2025-07-06T23:58:23.024899619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.025574 containerd[1511]: time="2025-07-06T23:58:23.024981212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.025574 containerd[1511]: time="2025-07-06T23:58:23.025305263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.027587 containerd[1511]: time="2025-07-06T23:58:23.027508996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.032721 containerd[1511]: time="2025-07-06T23:58:23.032469696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.032721 containerd[1511]: time="2025-07-06T23:58:23.032514130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.032721 containerd[1511]: time="2025-07-06T23:58:23.032524229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.032853 containerd[1511]: time="2025-07-06T23:58:23.032730578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.034790 containerd[1511]: time="2025-07-06T23:58:23.034569645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.034790 containerd[1511]: time="2025-07-06T23:58:23.034609509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.034790 containerd[1511]: time="2025-07-06T23:58:23.034619888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.034790 containerd[1511]: time="2025-07-06T23:58:23.034670544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.043419 kubelet[2344]: E0706 23:58:23.043367 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.92.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:58:23.052623 systemd[1]: Started cri-containerd-f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf.scope - libcontainer container f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf. Jul 6 23:58:23.056922 systemd[1]: Started cri-containerd-017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f.scope - libcontainer container 017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f. Jul 6 23:58:23.059378 systemd[1]: Started cri-containerd-f138e17e8c6d4fa3bd0026a33193dda224ca6748ef7467747db63a0800b0aa93.scope - libcontainer container f138e17e8c6d4fa3bd0026a33193dda224ca6748ef7467747db63a0800b0aa93. 
Jul 6 23:58:23.110748 containerd[1511]: time="2025-07-06T23:58:23.110713661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-2-e8b158d58b,Uid:639a0feb47aa05e7f913847502174496,Namespace:kube-system,Attempt:0,} returns sandbox id \"f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf\"" Jul 6 23:58:23.126587 containerd[1511]: time="2025-07-06T23:58:23.126276394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-2-e8b158d58b,Uid:629dcf05c4e3e02db23291e7e5156339,Namespace:kube-system,Attempt:0,} returns sandbox id \"f138e17e8c6d4fa3bd0026a33193dda224ca6748ef7467747db63a0800b0aa93\"" Jul 6 23:58:23.129784 containerd[1511]: time="2025-07-06T23:58:23.129313828Z" level=info msg="CreateContainer within sandbox \"f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:58:23.131532 containerd[1511]: time="2025-07-06T23:58:23.131289401Z" level=info msg="CreateContainer within sandbox \"f138e17e8c6d4fa3bd0026a33193dda224ca6748ef7467747db63a0800b0aa93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:58:23.131759 containerd[1511]: time="2025-07-06T23:58:23.131741854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-2-e8b158d58b,Uid:0b2fec5f19c01dba89a5ec7821035218,Namespace:kube-system,Attempt:0,} returns sandbox id \"017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f\"" Jul 6 23:58:23.136113 containerd[1511]: time="2025-07-06T23:58:23.136075252Z" level=info msg="CreateContainer within sandbox \"017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:58:23.147572 containerd[1511]: time="2025-07-06T23:58:23.147483614Z" level=info msg="CreateContainer within sandbox \"f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d\"" Jul 6 23:58:23.148297 containerd[1511]: time="2025-07-06T23:58:23.148273293Z" level=info msg="StartContainer for \"7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d\"" Jul 6 23:58:23.150848 containerd[1511]: time="2025-07-06T23:58:23.150774037Z" level=info msg="CreateContainer within sandbox \"017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8\"" Jul 6 23:58:23.151792 containerd[1511]: time="2025-07-06T23:58:23.151631873Z" level=info msg="StartContainer for \"93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8\"" Jul 6 23:58:23.151792 containerd[1511]: time="2025-07-06T23:58:23.151726601Z" level=info msg="CreateContainer within sandbox \"f138e17e8c6d4fa3bd0026a33193dda224ca6748ef7467747db63a0800b0aa93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d3bd8aa5052ff7ffb499f102234c3c9ca08bb92b41f6b29bb87d4bd5560d959\"" Jul 6 23:58:23.154446 containerd[1511]: time="2025-07-06T23:58:23.152673185Z" level=info msg="StartContainer for \"8d3bd8aa5052ff7ffb499f102234c3c9ca08bb92b41f6b29bb87d4bd5560d959\"" Jul 6 23:58:23.177778 systemd[1]: Started cri-containerd-7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d.scope - libcontainer container 
7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d. Jul 6 23:58:23.183585 systemd[1]: Started cri-containerd-93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8.scope - libcontainer container 93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8. Jul 6 23:58:23.189662 systemd[1]: Started cri-containerd-8d3bd8aa5052ff7ffb499f102234c3c9ca08bb92b41f6b29bb87d4bd5560d959.scope - libcontainer container 8d3bd8aa5052ff7ffb499f102234c3c9ca08bb92b41f6b29bb87d4bd5560d959. Jul 6 23:58:23.241578 containerd[1511]: time="2025-07-06T23:58:23.240726879Z" level=info msg="StartContainer for \"93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8\" returns successfully" Jul 6 23:58:23.260133 containerd[1511]: time="2025-07-06T23:58:23.260093361Z" level=info msg="StartContainer for \"7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d\" returns successfully" Jul 6 23:58:23.262897 containerd[1511]: time="2025-07-06T23:58:23.262778052Z" level=info msg="StartContainer for \"8d3bd8aa5052ff7ffb499f102234c3c9ca08bb92b41f6b29bb87d4bd5560d959\" returns successfully" Jul 6 23:58:23.353293 kubelet[2344]: E0706 23:58:23.352926 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.92.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-2-e8b158d58b?timeout=10s\": dial tcp 157.180.92.196:6443: connect: connection refused" interval="1.6s" Jul 6 23:58:23.367862 kubelet[2344]: E0706 23:58:23.367811 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.92.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:58:23.518448 kubelet[2344]: E0706 23:58:23.517648 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.92.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-2-e8b158d58b&limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:58:23.521366 kubelet[2344]: I0706 23:58:23.521340 2344 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:23.522009 kubelet[2344]: E0706 23:58:23.521986 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.92.196:6443/api/v1/nodes\": dial tcp 157.180.92.196:6443: connect: connection refused" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:23.537730 kubelet[2344]: E0706 23:58:23.537689 2344 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.92.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.92.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:58:23.991948 kubelet[2344]: E0706 23:58:23.991737 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:23.994686 kubelet[2344]: E0706 23:58:23.993803 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" 
not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:23.998062 kubelet[2344]: E0706 23:58:23.997856 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:24.958253 kubelet[2344]: E0706 23:58:24.958195 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:24.999015 kubelet[2344]: E0706 23:58:24.998961 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:24.999841 kubelet[2344]: E0706 23:58:24.999605 2344 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.124353 kubelet[2344]: I0706 23:58:25.124210 2344 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.127580 kubelet[2344]: E0706 23:58:25.127537 2344 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-4-2-e8b158d58b" not found Jul 6 23:58:25.138503 kubelet[2344]: I0706 23:58:25.138455 2344 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.138503 kubelet[2344]: E0706 23:58:25.138498 2344 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-2-e8b158d58b\": node \"ci-4081-3-4-2-e8b158d58b\" not found" Jul 6 23:58:25.152143 kubelet[2344]: E0706 23:58:25.152081 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" Jul 6 23:58:25.252620 kubelet[2344]: E0706 23:58:25.252462 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" Jul 6 23:58:25.353502 kubelet[2344]: E0706 23:58:25.353437 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" Jul 6 23:58:25.454015 kubelet[2344]: E0706 23:58:25.453952 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-2-e8b158d58b\" not found" Jul 6 23:58:25.544570 kubelet[2344]: I0706 23:58:25.544450 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.551283 kubelet[2344]: E0706 23:58:25.551243 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-2-e8b158d58b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.551283 kubelet[2344]: I0706 23:58:25.551271 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.552803 kubelet[2344]: E0706 23:58:25.552764 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.552803 kubelet[2344]: I0706 23:58:25.552792 2344 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.554161 kubelet[2344]: E0706 23:58:25.554137 2344 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:25.918189 kubelet[2344]: I0706 23:58:25.918109 2344 apiserver.go:52] "Watching apiserver" Jul 6 23:58:25.949590 kubelet[2344]: I0706 23:58:25.949559 2344 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:58:26.569179 kubelet[2344]: I0706 23:58:26.569141 2344 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:26.770990 systemd[1]: Reloading requested from client PID 2625 ('systemctl') (unit session-7.scope)... Jul 6 23:58:26.771006 systemd[1]: Reloading... Jul 6 23:58:26.846519 zram_generator::config[2665]: No configuration found. Jul 6 23:58:26.938660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:58:27.009751 systemd[1]: Reloading finished in 238 ms. Jul 6 23:58:27.046950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:58:27.059509 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:58:27.059739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:58:27.065710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:58:27.153698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:58:27.159689 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:58:27.204793 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:58:27.204793 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:58:27.204793 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
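The repeated "no PriorityClass with name system-node-critical was found" failures a few entries above are expected during early bootstrap: the control-plane static pods request the built-in system-node-critical class, which the API server's bootstrap controller has not yet created at that point, and the kubelet simply retries the mirror pods. As an illustrative client-go sketch only (the kubeconfig path is an assumption, and in practice the API server seeds this class itself), creating the class by hand would look like:

    package main

    import (
        "context"

        schedv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative; any admin credential works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // 2000001000 is the well-known value of system-node-critical;
        // normally the API server's bootstrap controller creates it.
        pc := &schedv1.PriorityClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
            Value:       2000001000,
            Description: "Pods that must not be moved from their node.",
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }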
Jul 6 23:58:27.204793 kubelet[2716]: I0706 23:58:27.200916 2716 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:58:27.212764 kubelet[2716]: I0706 23:58:27.212709 2716 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:58:27.212764 kubelet[2716]: I0706 23:58:27.212734 2716 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:58:27.212952 kubelet[2716]: I0706 23:58:27.212931 2716 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:58:27.213831 kubelet[2716]: I0706 23:58:27.213813 2716 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:58:27.220443 kubelet[2716]: I0706 23:58:27.219616 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:58:27.231894 kubelet[2716]: E0706 23:58:27.231836 2716 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:58:27.232065 kubelet[2716]: I0706 23:58:27.232047 2716 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:58:27.234455 kubelet[2716]: I0706 23:58:27.234437 2716 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:58:27.234686 kubelet[2716]: I0706 23:58:27.234662 2716 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:58:27.234847 kubelet[2716]: I0706 23:58:27.234731 2716 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-2-e8b158d58b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:58:27.234964 kubelet[2716]: I0706 23:58:27.234955 2716 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 6 23:58:27.235017 kubelet[2716]: I0706 23:58:27.235010 2716 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:58:27.235089 kubelet[2716]: I0706 23:58:27.235082 2716 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:58:27.235245 kubelet[2716]: I0706 23:58:27.235234 2716 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:58:27.235300 kubelet[2716]: I0706 23:58:27.235293 2716 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:58:27.235350 kubelet[2716]: I0706 23:58:27.235344 2716 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:58:27.235395 kubelet[2716]: I0706 23:58:27.235390 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:58:27.244804 kubelet[2716]: I0706 23:58:27.244751 2716 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:58:27.245783 kubelet[2716]: I0706 23:58:27.245719 2716 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:58:27.248183 kubelet[2716]: I0706 23:58:27.248163 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:58:27.248247 kubelet[2716]: I0706 23:58:27.248221 2716 server.go:1289] "Started kubelet" Jul 6 23:58:27.248519 kubelet[2716]: I0706 23:58:27.248382 2716 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:58:27.248936 kubelet[2716]: I0706 23:58:27.248751 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:58:27.250125 kubelet[2716]: I0706 23:58:27.249045 2716 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:58:27.251174 kubelet[2716]: I0706 23:58:27.251159 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:58:27.252066 kubelet[2716]: I0706 23:58:27.252053 2716 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:58:27.255676 kubelet[2716]: E0706 23:58:27.255650 2716 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:58:27.255971 kubelet[2716]: I0706 23:58:27.255954 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:58:27.256275 kubelet[2716]: I0706 23:58:27.256241 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:58:27.258169 kubelet[2716]: I0706 23:58:27.258142 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:58:27.258364 kubelet[2716]: I0706 23:58:27.258228 2716 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:58:27.260476 kubelet[2716]: I0706 23:58:27.260340 2716 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:58:27.260793 kubelet[2716]: I0706 23:58:27.260589 2716 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:58:27.263436 kubelet[2716]: I0706 23:58:27.262027 2716 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:58:27.264871 kubelet[2716]: I0706 23:58:27.264821 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:58:27.266455 kubelet[2716]: I0706 23:58:27.265986 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:58:27.266455 kubelet[2716]: I0706 23:58:27.266024 2716 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:58:27.266455 kubelet[2716]: I0706 23:58:27.266045 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
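The factory.go:221 message above ("Registration of the crio container factory failed ... no such file or directory") is benign: the kubelet's embedded cAdvisor probes every runtime socket it knows about, and on a containerd-only host the CRI-O socket simply does not exist. A minimal Go sketch of the same probe:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the CRI-O socket the way a client would; on a containerd-only
        // host this fails with "no such file or directory", matching the
        // harmless factory.go:221 entry above.
        conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", time.Second)
        if err != nil {
            fmt.Println("crio not present:", err)
            return
        }
        conn.Close()
        fmt.Println("crio socket reachable")
    }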
Jul 6 23:58:27.266455 kubelet[2716]: I0706 23:58:27.266051 2716 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:58:27.266455 kubelet[2716]: E0706 23:58:27.266100 2716 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:58:27.310448 kubelet[2716]: I0706 23:58:27.310386 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:58:27.310448 kubelet[2716]: I0706 23:58:27.310403 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:58:27.310448 kubelet[2716]: I0706 23:58:27.310430 2716 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:58:27.310601 kubelet[2716]: I0706 23:58:27.310576 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:58:27.310601 kubelet[2716]: I0706 23:58:27.310586 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:58:27.310601 kubelet[2716]: I0706 23:58:27.310600 2716 policy_none.go:49] "None policy: Start" Jul 6 23:58:27.310648 kubelet[2716]: I0706 23:58:27.310609 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:58:27.310648 kubelet[2716]: I0706 23:58:27.310617 2716 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:58:27.310709 kubelet[2716]: I0706 23:58:27.310684 2716 state_mem.go:75] "Updated machine memory state" Jul 6 23:58:27.313697 kubelet[2716]: E0706 23:58:27.313677 2716 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:58:27.313811 kubelet[2716]: I0706 23:58:27.313794 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:58:27.313859 kubelet[2716]: I0706 23:58:27.313808 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:58:27.315012 kubelet[2716]: I0706 23:58:27.314754 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:58:27.316007 kubelet[2716]: E0706 23:58:27.315989 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:58:27.367493 kubelet[2716]: I0706 23:58:27.367026 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.367493 kubelet[2716]: I0706 23:58:27.367026 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.368159 kubelet[2716]: I0706 23:58:27.367659 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.376494 kubelet[2716]: E0706 23:58:27.376229 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.418105 kubelet[2716]: I0706 23:58:27.418049 2716 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.428909 kubelet[2716]: I0706 23:58:27.428744 2716 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.428909 kubelet[2716]: I0706 23:58:27.428848 2716 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.459830 kubelet[2716]: I0706 23:58:27.459618 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.459830 kubelet[2716]: I0706 23:58:27.459661 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.459830 kubelet[2716]: I0706 23:58:27.459680 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.459830 kubelet[2716]: I0706 23:58:27.459696 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.459830 kubelet[2716]: I0706 23:58:27.459732 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.460152 kubelet[2716]: I0706 23:58:27.459766 2716 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.460152 kubelet[2716]: I0706 23:58:27.459794 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b2fec5f19c01dba89a5ec7821035218-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-2-e8b158d58b\" (UID: \"0b2fec5f19c01dba89a5ec7821035218\") " pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.460152 kubelet[2716]: I0706 23:58:27.459823 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/629dcf05c4e3e02db23291e7e5156339-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" (UID: \"629dcf05c4e3e02db23291e7e5156339\") " pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:27.460152 kubelet[2716]: I0706 23:58:27.459911 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/639a0feb47aa05e7f913847502174496-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" (UID: \"639a0feb47aa05e7f913847502174496\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.236273 kubelet[2716]: I0706 23:58:28.236229 2716 apiserver.go:52] "Watching apiserver" Jul 6 23:58:28.258731 kubelet[2716]: I0706 23:58:28.258653 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:58:28.294330 kubelet[2716]: I0706 23:58:28.294273 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.294625 kubelet[2716]: I0706 23:58:28.294597 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.295601 kubelet[2716]: I0706 23:58:28.295565 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.303155 kubelet[2716]: E0706 23:58:28.303110 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-2-e8b158d58b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.303362 kubelet[2716]: E0706 23:58:28.303337 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-2-e8b158d58b\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.304846 kubelet[2716]: E0706 23:58:28.304660 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-2-e8b158d58b\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" Jul 6 23:58:28.318656 kubelet[2716]: I0706 23:58:28.318591 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-2-e8b158d58b" podStartSLOduration=2.318576941 podStartE2EDuration="2.318576941s" podCreationTimestamp="2025-07-06 23:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:28.318486741 +0000 UTC m=+1.151666836" watchObservedRunningTime="2025-07-06 23:58:28.318576941 +0000 UTC m=+1.151757016" Jul 6 23:58:28.340448 kubelet[2716]: I0706 23:58:28.339992 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-2-e8b158d58b" podStartSLOduration=1.339967417 podStartE2EDuration="1.339967417s" podCreationTimestamp="2025-07-06 23:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:28.32630514 +0000 UTC m=+1.159485225" watchObservedRunningTime="2025-07-06 23:58:28.339967417 +0000 UTC m=+1.173147492" Jul 6 23:58:33.000226 kubelet[2716]: I0706 23:58:33.000180 2716 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:58:33.000851 kubelet[2716]: I0706 23:58:33.000727 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:58:33.000948 containerd[1511]: time="2025-07-06T23:58:33.000572848Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:58:34.120983 kubelet[2716]: I0706 23:58:34.120873 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-2-e8b158d58b" podStartSLOduration=7.120837778 podStartE2EDuration="7.120837778s" podCreationTimestamp="2025-07-06 23:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:28.341935505 +0000 UTC m=+1.175115581" watchObservedRunningTime="2025-07-06 23:58:34.120837778 +0000 UTC m=+6.954017853" Jul 6 23:58:34.131113 systemd[1]: Created slice kubepods-besteffort-pod02f99be8_e1f9_49d3_8599_84a87f85a3bc.slice - libcontainer container kubepods-besteffort-pod02f99be8_e1f9_49d3_8599_84a87f85a3bc.slice. Jul 6 23:58:34.163386 systemd[1]: Created slice kubepods-besteffort-podb8fb9b22_1b23_41ea_9eb5_a7fb9a9b79eb.slice - libcontainer container kubepods-besteffort-podb8fb9b22_1b23_41ea_9eb5_a7fb9a9b79eb.slice. 
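The pod_startup_latency_tracker entries report podStartSLOduration as observedRunningTime minus podCreationTimestamp; the pull timestamps are the Go zero time because these images were already present on disk. A small sketch reproducing the kube-scheduler figure from the timestamps logged above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kube-scheduler entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-07-06 23:58:27 +0000 UTC")
        running, _ := time.Parse(layout, "2025-07-06 23:58:34.120837778 +0000 UTC")
        fmt.Println(running.Sub(created)) // 7.120837778s == podStartSLOduration
    }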
Jul 6 23:58:34.204954 kubelet[2716]: I0706 23:58:34.204912 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb-kube-proxy\") pod \"kube-proxy-85fsx\" (UID: \"b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb\") " pod="kube-system/kube-proxy-85fsx" Jul 6 23:58:34.204954 kubelet[2716]: I0706 23:58:34.204947 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pvn8\" (UniqueName: \"kubernetes.io/projected/b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb-kube-api-access-9pvn8\") pod \"kube-proxy-85fsx\" (UID: \"b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb\") " pod="kube-system/kube-proxy-85fsx" Jul 6 23:58:34.204954 kubelet[2716]: I0706 23:58:34.204962 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb-lib-modules\") pod \"kube-proxy-85fsx\" (UID: \"b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb\") " pod="kube-system/kube-proxy-85fsx" Jul 6 23:58:34.205164 kubelet[2716]: I0706 23:58:34.204977 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqb92\" (UniqueName: \"kubernetes.io/projected/02f99be8-e1f9-49d3-8599-84a87f85a3bc-kube-api-access-nqb92\") pod \"tigera-operator-747864d56d-jpqhc\" (UID: \"02f99be8-e1f9-49d3-8599-84a87f85a3bc\") " pod="tigera-operator/tigera-operator-747864d56d-jpqhc" Jul 6 23:58:34.205164 kubelet[2716]: I0706 23:58:34.204988 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb-xtables-lock\") pod \"kube-proxy-85fsx\" (UID: \"b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb\") " pod="kube-system/kube-proxy-85fsx" Jul 6 23:58:34.205164 kubelet[2716]: I0706 23:58:34.205037 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02f99be8-e1f9-49d3-8599-84a87f85a3bc-var-lib-calico\") pod \"tigera-operator-747864d56d-jpqhc\" (UID: \"02f99be8-e1f9-49d3-8599-84a87f85a3bc\") " pod="tigera-operator/tigera-operator-747864d56d-jpqhc" Jul 6 23:58:34.438133 containerd[1511]: time="2025-07-06T23:58:34.437998959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jpqhc,Uid:02f99be8-e1f9-49d3-8599-84a87f85a3bc,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:58:34.460471 containerd[1511]: time="2025-07-06T23:58:34.460123618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:34.460471 containerd[1511]: time="2025-07-06T23:58:34.460170937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:34.460471 containerd[1511]: time="2025-07-06T23:58:34.460182389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:34.460471 containerd[1511]: time="2025-07-06T23:58:34.460240809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:34.467594 containerd[1511]: time="2025-07-06T23:58:34.467191566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85fsx,Uid:b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:34.486751 systemd[1]: Started cri-containerd-7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde.scope - libcontainer container 7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde. Jul 6 23:58:34.491024 containerd[1511]: time="2025-07-06T23:58:34.490795081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:34.491234 containerd[1511]: time="2025-07-06T23:58:34.490899587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:34.491234 containerd[1511]: time="2025-07-06T23:58:34.491169415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:34.491757 containerd[1511]: time="2025-07-06T23:58:34.491672923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:34.516540 systemd[1]: Started cri-containerd-97be3dde52ce701eb70bc1ac29d2543c088b5cce576853cbaeabcdd418ab5bbe.scope - libcontainer container 97be3dde52ce701eb70bc1ac29d2543c088b5cce576853cbaeabcdd418ab5bbe. Jul 6 23:58:34.527434 containerd[1511]: time="2025-07-06T23:58:34.527319673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jpqhc,Uid:02f99be8-e1f9-49d3-8599-84a87f85a3bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde\"" Jul 6 23:58:34.530509 containerd[1511]: time="2025-07-06T23:58:34.530350661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:58:34.538483 containerd[1511]: time="2025-07-06T23:58:34.538435134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85fsx,Uid:b8fb9b22-1b23-41ea-9eb5-a7fb9a9b79eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"97be3dde52ce701eb70bc1ac29d2543c088b5cce576853cbaeabcdd418ab5bbe\"" Jul 6 23:58:34.543654 containerd[1511]: time="2025-07-06T23:58:34.543588466Z" level=info msg="CreateContainer within sandbox \"97be3dde52ce701eb70bc1ac29d2543c088b5cce576853cbaeabcdd418ab5bbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:58:34.559252 containerd[1511]: time="2025-07-06T23:58:34.559223516Z" level=info msg="CreateContainer within sandbox \"97be3dde52ce701eb70bc1ac29d2543c088b5cce576853cbaeabcdd418ab5bbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b52ffc43dc37d70e2bba286fb55b30b8c5df4fc76a47aa6083b180956276f64a\"" Jul 6 23:58:34.559940 containerd[1511]: time="2025-07-06T23:58:34.559902745Z" level=info msg="StartContainer for \"b52ffc43dc37d70e2bba286fb55b30b8c5df4fc76a47aa6083b180956276f64a\"" Jul 6 23:58:34.582523 systemd[1]: Started cri-containerd-b52ffc43dc37d70e2bba286fb55b30b8c5df4fc76a47aa6083b180956276f64a.scope - libcontainer container b52ffc43dc37d70e2bba286fb55b30b8c5df4fc76a47aa6083b180956276f64a. 
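The sandbox and container lifecycle above follows the standard CRI sequence: RunPodSandbox returns a sandbox id, then CreateContainer and StartContainer run within it, with systemd tracking each container as a transient cri-containerd-<id>.scope unit. A hedged sketch of talking to the same runtime.v1 gRPC service (socket path as used on this host; only the read-only Version call is shown):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Query the runtime the way crictl does; pod creation proceeds via
        // RunPodSandbox -> CreateContainer -> StartContainer on this client.
        ver, err := rt.Version(context.TODO(), &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(ver.RuntimeName, ver.RuntimeVersion)
    }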
Jul 6 23:58:34.602904 containerd[1511]: time="2025-07-06T23:58:34.602866031Z" level=info msg="StartContainer for \"b52ffc43dc37d70e2bba286fb55b30b8c5df4fc76a47aa6083b180956276f64a\" returns successfully" Jul 6 23:58:36.106998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521792275.mount: Deactivated successfully. Jul 6 23:58:36.466759 containerd[1511]: time="2025-07-06T23:58:36.466626706Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:36.467730 containerd[1511]: time="2025-07-06T23:58:36.467573027Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:58:36.468385 containerd[1511]: time="2025-07-06T23:58:36.468343287Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:36.473595 containerd[1511]: time="2025-07-06T23:58:36.472631631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:36.473706 containerd[1511]: time="2025-07-06T23:58:36.473686907Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.94329078s" Jul 6 23:58:36.473764 containerd[1511]: time="2025-07-06T23:58:36.473752261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:58:36.480103 containerd[1511]: time="2025-07-06T23:58:36.480070897Z" level=info msg="CreateContainer within sandbox \"7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:58:36.488902 containerd[1511]: time="2025-07-06T23:58:36.488868008Z" level=info msg="CreateContainer within sandbox \"7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f\"" Jul 6 23:58:36.489726 containerd[1511]: time="2025-07-06T23:58:36.489452459Z" level=info msg="StartContainer for \"807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f\"" Jul 6 23:58:36.515543 systemd[1]: Started cri-containerd-807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f.scope - libcontainer container 807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f. 
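The operator pull above reports 25056543 bytes read in 1.94329078s, i.e. roughly 12.3 MiB/s. Checking that arithmetic:

    package main

    import "fmt"

    func main() {
        const bytesRead = 25056543 // from "bytes read=25056543"
        const seconds = 1.94329078 // from "in 1.94329078s"
        fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024))
    }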
Jul 6 23:58:36.536687 containerd[1511]: time="2025-07-06T23:58:36.536653652Z" level=info msg="StartContainer for \"807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f\" returns successfully" Jul 6 23:58:37.326993 kubelet[2716]: I0706 23:58:37.324284 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85fsx" podStartSLOduration=3.324266768 podStartE2EDuration="3.324266768s" podCreationTimestamp="2025-07-06 23:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:35.322233252 +0000 UTC m=+8.155413327" watchObservedRunningTime="2025-07-06 23:58:37.324266768 +0000 UTC m=+10.157446853" Jul 6 23:58:38.929429 kubelet[2716]: I0706 23:58:38.927283 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jpqhc" podStartSLOduration=2.97799806 podStartE2EDuration="4.927267808s" podCreationTimestamp="2025-07-06 23:58:34 +0000 UTC" firstStartedPulling="2025-07-06 23:58:34.528563285 +0000 UTC m=+7.361743360" lastFinishedPulling="2025-07-06 23:58:36.477833033 +0000 UTC m=+9.311013108" observedRunningTime="2025-07-06 23:58:37.326879588 +0000 UTC m=+10.160059703" watchObservedRunningTime="2025-07-06 23:58:38.927267808 +0000 UTC m=+11.760447883" Jul 6 23:58:42.602391 sudo[1873]: pam_unix(sudo:session): session closed for user root Jul 6 23:58:42.767327 sshd[1870]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:42.773618 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:58:42.774669 systemd[1]: sshd@6-157.180.92.196:22-147.75.109.163:51206.service: Deactivated successfully. Jul 6 23:58:42.780915 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:58:42.781640 systemd[1]: session-7.scope: Consumed 4.142s CPU time, 145.0M memory peak, 0B memory swap peak. Jul 6 23:58:42.784438 systemd-logind[1486]: Removed session 7. Jul 6 23:58:45.419822 systemd[1]: Created slice kubepods-besteffort-pod50d17794_aa80_4bb3_9eb9_aa487ccb9fc3.slice - libcontainer container kubepods-besteffort-pod50d17794_aa80_4bb3_9eb9_aa487ccb9fc3.slice. 
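The slice names above encode the pod's QoS class and UID, with the UID's dashes mapped to underscores, per the systemd cgroup driver convention (the kubelet's NodeConfig earlier shows "CgroupDriver":"systemd"). A short sketch of the naming rule:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod reproduces the systemd cgroup-driver naming visible in the
    // log: kubepods-<qos>-pod<uid-with-underscores>.slice.
    func sliceForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceForPod("besteffort", "50d17794-aa80-4bb3-9eb9-aa487ccb9fc3"))
        // kubepods-besteffort-pod50d17794_aa80_4bb3_9eb9_aa487ccb9fc3.slice
    }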
Jul 6 23:58:45.492864 kubelet[2716]: I0706 23:58:45.492759 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/50d17794-aa80-4bb3-9eb9-aa487ccb9fc3-typha-certs\") pod \"calico-typha-6bbfffb7db-6n5mr\" (UID: \"50d17794-aa80-4bb3-9eb9-aa487ccb9fc3\") " pod="calico-system/calico-typha-6bbfffb7db-6n5mr" Jul 6 23:58:45.492864 kubelet[2716]: I0706 23:58:45.492802 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50d17794-aa80-4bb3-9eb9-aa487ccb9fc3-tigera-ca-bundle\") pod \"calico-typha-6bbfffb7db-6n5mr\" (UID: \"50d17794-aa80-4bb3-9eb9-aa487ccb9fc3\") " pod="calico-system/calico-typha-6bbfffb7db-6n5mr" Jul 6 23:58:45.492864 kubelet[2716]: I0706 23:58:45.492831 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncqh\" (UniqueName: \"kubernetes.io/projected/50d17794-aa80-4bb3-9eb9-aa487ccb9fc3-kube-api-access-zncqh\") pod \"calico-typha-6bbfffb7db-6n5mr\" (UID: \"50d17794-aa80-4bb3-9eb9-aa487ccb9fc3\") " pod="calico-system/calico-typha-6bbfffb7db-6n5mr" Jul 6 23:58:45.731750 containerd[1511]: time="2025-07-06T23:58:45.731168805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbfffb7db-6n5mr,Uid:50d17794-aa80-4bb3-9eb9-aa487ccb9fc3,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:45.754337 containerd[1511]: time="2025-07-06T23:58:45.753914975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:45.754665 containerd[1511]: time="2025-07-06T23:58:45.754520383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:45.754665 containerd[1511]: time="2025-07-06T23:58:45.754539770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:45.755360 containerd[1511]: time="2025-07-06T23:58:45.754770394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:45.799592 systemd[1]: Started cri-containerd-0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a.scope - libcontainer container 0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a. Jul 6 23:58:45.846088 systemd[1]: Created slice kubepods-besteffort-podc4833a94_2939_4d41_a78d_443fae59b17b.slice - libcontainer container kubepods-besteffort-podc4833a94_2939_4d41_a78d_443fae59b17b.slice. 
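The VerifyControllerAttachedVolume entries above enumerate the calico-typha pod's three volumes: a typha-certs Secret, a tigera-ca-bundle ConfigMap, and a kubelet-projected kube-api-access token. Sketched as client-go types (the Secret and ConfigMap object names are assumptions; the log only shows the volume names):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "typha-certs", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
            }},
            {Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
                },
            }},
            // kube-api-access-zncqh is the kubelet-generated projected
            // service-account token volume; its sources are filled in by
            // the API server's TokenRequest projection, not by the pod spec.
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }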
Jul 6 23:58:45.897065 kubelet[2716]: I0706 23:58:45.896713 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-flexvol-driver-host\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897065 kubelet[2716]: I0706 23:58:45.896752 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-policysync\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897065 kubelet[2716]: I0706 23:58:45.896777 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4833a94-2939-4d41-a78d-443fae59b17b-tigera-ca-bundle\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897065 kubelet[2716]: I0706 23:58:45.896829 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-var-run-calico\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897065 kubelet[2716]: I0706 23:58:45.896847 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-cni-log-dir\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897259 kubelet[2716]: I0706 23:58:45.896861 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-cni-net-dir\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897259 kubelet[2716]: I0706 23:58:45.896874 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-lib-modules\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897259 kubelet[2716]: I0706 23:58:45.896887 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-var-lib-calico\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897259 kubelet[2716]: I0706 23:58:45.896901 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4b4\" (UniqueName: \"kubernetes.io/projected/c4833a94-2939-4d41-a78d-443fae59b17b-kube-api-access-rc4b4\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897259 kubelet[2716]: I0706 23:58:45.896942 2716 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-cni-bin-dir\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897348 kubelet[2716]: I0706 23:58:45.896955 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4833a94-2939-4d41-a78d-443fae59b17b-node-certs\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.897348 kubelet[2716]: I0706 23:58:45.896966 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4833a94-2939-4d41-a78d-443fae59b17b-xtables-lock\") pod \"calico-node-vrxs4\" (UID: \"c4833a94-2939-4d41-a78d-443fae59b17b\") " pod="calico-system/calico-node-vrxs4" Jul 6 23:58:45.901740 containerd[1511]: time="2025-07-06T23:58:45.900078065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbfffb7db-6n5mr,Uid:50d17794-aa80-4bb3-9eb9-aa487ccb9fc3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a\"" Jul 6 23:58:45.905271 containerd[1511]: time="2025-07-06T23:58:45.904535943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:58:46.002492 kubelet[2716]: E0706 23:58:46.001913 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.002617 kubelet[2716]: W0706 23:58:46.002602 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.002709 kubelet[2716]: E0706 23:58:46.002697 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.006385 kubelet[2716]: E0706 23:58:46.006351 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.006385 kubelet[2716]: W0706 23:58:46.006374 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.006503 kubelet[2716]: E0706 23:58:46.006394 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.125831 kubelet[2716]: E0706 23:58:46.125579 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:46.151780 containerd[1511]: time="2025-07-06T23:58:46.151731141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrxs4,Uid:c4833a94-2939-4d41-a78d-443fae59b17b,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:46.176784 containerd[1511]: time="2025-07-06T23:58:46.176365171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:46.176784 containerd[1511]: time="2025-07-06T23:58:46.176522316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:46.176784 containerd[1511]: time="2025-07-06T23:58:46.176539298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:46.176784 containerd[1511]: time="2025-07-06T23:58:46.176625550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:46.181334 kubelet[2716]: E0706 23:58:46.181223 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.181334 kubelet[2716]: W0706 23:58:46.181268 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.181334 kubelet[2716]: E0706 23:58:46.181290 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.182293 kubelet[2716]: E0706 23:58:46.181477 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.182293 kubelet[2716]: W0706 23:58:46.181485 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.182293 kubelet[2716]: E0706 23:58:46.181495 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.182386 kubelet[2716]: E0706 23:58:46.182363 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.182386 kubelet[2716]: W0706 23:58:46.182381 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.182748 kubelet[2716]: E0706 23:58:46.182391 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.182748 kubelet[2716]: E0706 23:58:46.182620 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.182748 kubelet[2716]: W0706 23:58:46.182627 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.182748 kubelet[2716]: E0706 23:58:46.182635 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.182900 kubelet[2716]: E0706 23:58:46.182799 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.183449 kubelet[2716]: W0706 23:58:46.182820 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.183449 kubelet[2716]: E0706 23:58:46.182938 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.183449 kubelet[2716]: E0706 23:58:46.183151 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.183449 kubelet[2716]: W0706 23:58:46.183274 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.183449 kubelet[2716]: E0706 23:58:46.183285 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.183836 kubelet[2716]: E0706 23:58:46.183800 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.183836 kubelet[2716]: W0706 23:58:46.183832 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.183978 kubelet[2716]: E0706 23:58:46.183843 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.184670 kubelet[2716]: E0706 23:58:46.184385 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.184670 kubelet[2716]: W0706 23:58:46.184454 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.184670 kubelet[2716]: E0706 23:58:46.184478 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.184993 kubelet[2716]: E0706 23:58:46.184983 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.185069 kubelet[2716]: W0706 23:58:46.185059 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.185122 kubelet[2716]: E0706 23:58:46.185113 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.185536 kubelet[2716]: E0706 23:58:46.185526 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.185598 kubelet[2716]: W0706 23:58:46.185589 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.185658 kubelet[2716]: E0706 23:58:46.185649 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.186064 kubelet[2716]: E0706 23:58:46.185835 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.186064 kubelet[2716]: W0706 23:58:46.185844 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.186064 kubelet[2716]: E0706 23:58:46.185852 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.186221 kubelet[2716]: E0706 23:58:46.186212 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.186284 kubelet[2716]: W0706 23:58:46.186275 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.186346 kubelet[2716]: E0706 23:58:46.186329 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.186708 kubelet[2716]: E0706 23:58:46.186699 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.186837 kubelet[2716]: W0706 23:58:46.186758 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.186837 kubelet[2716]: E0706 23:58:46.186769 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.187375 kubelet[2716]: E0706 23:58:46.187209 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.187375 kubelet[2716]: W0706 23:58:46.187218 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.187375 kubelet[2716]: E0706 23:58:46.187226 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.188001 kubelet[2716]: E0706 23:58:46.187635 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.188001 kubelet[2716]: W0706 23:58:46.187645 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.188001 kubelet[2716]: E0706 23:58:46.187654 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.188001 kubelet[2716]: E0706 23:58:46.187854 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.188001 kubelet[2716]: W0706 23:58:46.187862 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.188001 kubelet[2716]: E0706 23:58:46.187870 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.188446 kubelet[2716]: E0706 23:58:46.188257 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.188446 kubelet[2716]: W0706 23:58:46.188270 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.188446 kubelet[2716]: E0706 23:58:46.188282 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.188676 kubelet[2716]: E0706 23:58:46.188572 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.188676 kubelet[2716]: W0706 23:58:46.188581 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.188676 kubelet[2716]: E0706 23:58:46.188588 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.190093 kubelet[2716]: E0706 23:58:46.189850 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.190093 kubelet[2716]: W0706 23:58:46.189861 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.190093 kubelet[2716]: E0706 23:58:46.189869 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.190093 kubelet[2716]: E0706 23:58:46.190013 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.190093 kubelet[2716]: W0706 23:58:46.190023 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.190093 kubelet[2716]: E0706 23:58:46.190030 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.199713 systemd[1]: Started cri-containerd-e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262.scope - libcontainer container e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262. Jul 6 23:58:46.199954 kubelet[2716]: E0706 23:58:46.199925 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.199954 kubelet[2716]: W0706 23:58:46.199950 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.200019 kubelet[2716]: E0706 23:58:46.199968 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.200019 kubelet[2716]: I0706 23:58:46.199997 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ca62d42-186f-4f68-9ed0-588257419b27-kubelet-dir\") pod \"csi-node-driver-2k7qf\" (UID: \"0ca62d42-186f-4f68-9ed0-588257419b27\") " pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:46.200587 kubelet[2716]: E0706 23:58:46.200561 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.200587 kubelet[2716]: W0706 23:58:46.200584 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.200672 kubelet[2716]: E0706 23:58:46.200595 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.200672 kubelet[2716]: I0706 23:58:46.200616 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ca62d42-186f-4f68-9ed0-588257419b27-varrun\") pod \"csi-node-driver-2k7qf\" (UID: \"0ca62d42-186f-4f68-9ed0-588257419b27\") " pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:46.201188 kubelet[2716]: E0706 23:58:46.201166 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.201459 kubelet[2716]: W0706 23:58:46.201436 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.201459 kubelet[2716]: E0706 23:58:46.201458 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.201546 kubelet[2716]: I0706 23:58:46.201479 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ca62d42-186f-4f68-9ed0-588257419b27-socket-dir\") pod \"csi-node-driver-2k7qf\" (UID: \"0ca62d42-186f-4f68-9ed0-588257419b27\") " pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:46.202095 kubelet[2716]: E0706 23:58:46.202073 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.202095 kubelet[2716]: W0706 23:58:46.202090 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.202190 kubelet[2716]: E0706 23:58:46.202099 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.202615 kubelet[2716]: I0706 23:58:46.202582 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0ca62d42-186f-4f68-9ed0-588257419b27-registration-dir\") pod \"csi-node-driver-2k7qf\" (UID: \"0ca62d42-186f-4f68-9ed0-588257419b27\") " pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:46.203104 kubelet[2716]: E0706 23:58:46.203081 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.203104 kubelet[2716]: W0706 23:58:46.203100 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.203104 kubelet[2716]: E0706 23:58:46.203110 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.203549 kubelet[2716]: E0706 23:58:46.203484 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.203549 kubelet[2716]: W0706 23:58:46.203508 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.203549 kubelet[2716]: E0706 23:58:46.203521 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.205025 kubelet[2716]: E0706 23:58:46.204291 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.205025 kubelet[2716]: W0706 23:58:46.204305 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.205025 kubelet[2716]: E0706 23:58:46.204314 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.209228 kubelet[2716]: E0706 23:58:46.209190 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.209228 kubelet[2716]: W0706 23:58:46.209207 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.209228 kubelet[2716]: E0706 23:58:46.209217 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.209576 kubelet[2716]: E0706 23:58:46.209552 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.209576 kubelet[2716]: W0706 23:58:46.209568 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.209576 kubelet[2716]: E0706 23:58:46.209577 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.209876 kubelet[2716]: E0706 23:58:46.209852 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.209876 kubelet[2716]: W0706 23:58:46.209868 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.209876 kubelet[2716]: E0706 23:58:46.209877 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.210167 kubelet[2716]: E0706 23:58:46.210146 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.210167 kubelet[2716]: W0706 23:58:46.210161 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.210167 kubelet[2716]: E0706 23:58:46.210170 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.210485 kubelet[2716]: E0706 23:58:46.210461 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.210485 kubelet[2716]: W0706 23:58:46.210480 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.210485 kubelet[2716]: E0706 23:58:46.210488 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.210583 kubelet[2716]: I0706 23:58:46.210510 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdsh\" (UniqueName: \"kubernetes.io/projected/0ca62d42-186f-4f68-9ed0-588257419b27-kube-api-access-lrdsh\") pod \"csi-node-driver-2k7qf\" (UID: \"0ca62d42-186f-4f68-9ed0-588257419b27\") " pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:46.210912 kubelet[2716]: E0706 23:58:46.210888 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.210912 kubelet[2716]: W0706 23:58:46.210905 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.210912 kubelet[2716]: E0706 23:58:46.210914 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.213311 kubelet[2716]: E0706 23:58:46.212324 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.213311 kubelet[2716]: W0706 23:58:46.212335 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.213311 kubelet[2716]: E0706 23:58:46.212344 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.213392 kubelet[2716]: E0706 23:58:46.213363 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.213392 kubelet[2716]: W0706 23:58:46.213375 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.213392 kubelet[2716]: E0706 23:58:46.213383 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.227019 containerd[1511]: time="2025-07-06T23:58:46.226955495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrxs4,Uid:c4833a94-2939-4d41-a78d-443fae59b17b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\"" Jul 6 23:58:46.311134 kubelet[2716]: E0706 23:58:46.310905 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.311134 kubelet[2716]: W0706 23:58:46.310929 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.311134 kubelet[2716]: E0706 23:58:46.310946 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.311972 kubelet[2716]: E0706 23:58:46.311952 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.311972 kubelet[2716]: W0706 23:58:46.311969 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.311972 kubelet[2716]: E0706 23:58:46.311978 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.312223 kubelet[2716]: E0706 23:58:46.312156 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.312223 kubelet[2716]: W0706 23:58:46.312166 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.312223 kubelet[2716]: E0706 23:58:46.312173 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.312642 kubelet[2716]: E0706 23:58:46.312571 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.312642 kubelet[2716]: W0706 23:58:46.312583 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.312642 kubelet[2716]: E0706 23:58:46.312592 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.313092 kubelet[2716]: E0706 23:58:46.313013 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.313092 kubelet[2716]: W0706 23:58:46.313024 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.313092 kubelet[2716]: E0706 23:58:46.313031 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.313471 kubelet[2716]: E0706 23:58:46.313451 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.313471 kubelet[2716]: W0706 23:58:46.313466 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.313626 kubelet[2716]: E0706 23:58:46.313474 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.314794 kubelet[2716]: E0706 23:58:46.314651 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.314794 kubelet[2716]: W0706 23:58:46.314663 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.314794 kubelet[2716]: E0706 23:58:46.314671 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.314794 kubelet[2716]: E0706 23:58:46.314788 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.314794 kubelet[2716]: W0706 23:58:46.314794 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.314802 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.314921 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315196 kubelet[2716]: W0706 23:58:46.314927 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.314933 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.315059 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315196 kubelet[2716]: W0706 23:58:46.315065 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.315072 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.315184 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315196 kubelet[2716]: W0706 23:58:46.315190 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315196 kubelet[2716]: E0706 23:58:46.315196 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315301 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315999 kubelet[2716]: W0706 23:58:46.315307 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315314 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315481 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315999 kubelet[2716]: W0706 23:58:46.315488 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315496 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315653 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.315999 kubelet[2716]: W0706 23:58:46.315660 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315666 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.315999 kubelet[2716]: E0706 23:58:46.315862 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.316175 kubelet[2716]: W0706 23:58:46.315869 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.316175 kubelet[2716]: E0706 23:58:46.315876 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.316175 kubelet[2716]: E0706 23:58:46.316141 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.316175 kubelet[2716]: W0706 23:58:46.316150 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.316175 kubelet[2716]: E0706 23:58:46.316158 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.316562 kubelet[2716]: E0706 23:58:46.316537 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.316562 kubelet[2716]: W0706 23:58:46.316553 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.316562 kubelet[2716]: E0706 23:58:46.316562 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.316755 kubelet[2716]: E0706 23:58:46.316741 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.316755 kubelet[2716]: W0706 23:58:46.316752 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.316801 kubelet[2716]: E0706 23:58:46.316760 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.316987 kubelet[2716]: E0706 23:58:46.316973 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.316987 kubelet[2716]: W0706 23:58:46.316985 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.317042 kubelet[2716]: E0706 23:58:46.317005 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.317230 kubelet[2716]: E0706 23:58:46.317198 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.317230 kubelet[2716]: W0706 23:58:46.317219 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.317341 kubelet[2716]: E0706 23:58:46.317238 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.317493 kubelet[2716]: E0706 23:58:46.317487 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.317523 kubelet[2716]: W0706 23:58:46.317494 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.317523 kubelet[2716]: E0706 23:58:46.317502 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.317691 kubelet[2716]: E0706 23:58:46.317667 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.317691 kubelet[2716]: W0706 23:58:46.317681 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.317691 kubelet[2716]: E0706 23:58:46.317688 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.317881 kubelet[2716]: E0706 23:58:46.317859 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.317881 kubelet[2716]: W0706 23:58:46.317872 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.317881 kubelet[2716]: E0706 23:58:46.317878 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:46.319098 kubelet[2716]: E0706 23:58:46.319053 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.319098 kubelet[2716]: W0706 23:58:46.319068 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.319098 kubelet[2716]: E0706 23:58:46.319080 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.320592 kubelet[2716]: E0706 23:58:46.320549 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.320592 kubelet[2716]: W0706 23:58:46.320570 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.320592 kubelet[2716]: E0706 23:58:46.320581 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.328147 kubelet[2716]: E0706 23:58:46.327950 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:46.328147 kubelet[2716]: W0706 23:58:46.327967 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:46.328147 kubelet[2716]: E0706 23:58:46.327984 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:46.605982 systemd[1]: run-containerd-runc-k8s.io-0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a-runc.ZbBxS6.mount: Deactivated successfully. Jul 6 23:58:47.267494 kubelet[2716]: E0706 23:58:47.266992 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:47.749674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697134047.mount: Deactivated successfully. 
Jul 6 23:58:48.766784 containerd[1511]: time="2025-07-06T23:58:48.766735520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:48.767757 containerd[1511]: time="2025-07-06T23:58:48.767609294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 6 23:58:48.768487 containerd[1511]: time="2025-07-06T23:58:48.768443403Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:48.770095 containerd[1511]: time="2025-07-06T23:58:48.770062679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:48.770840 containerd[1511]: time="2025-07-06T23:58:48.770520260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.86594885s"
Jul 6 23:58:48.770840 containerd[1511]: time="2025-07-06T23:58:48.770547252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 6 23:58:48.771336 containerd[1511]: time="2025-07-06T23:58:48.771309846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 6 23:58:48.783875 containerd[1511]: time="2025-07-06T23:58:48.783845689Z" level=info msg="CreateContainer within sandbox \"0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:58:48.840959 containerd[1511]: time="2025-07-06T23:58:48.840901330Z" level=info msg="CreateContainer within sandbox \"0ac7cf57bee924fcdffc26bf366c2ada4f9f808b06f08e5ec2aac3cce6e02b1a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"48c16bb2ff8824494fb2b3f8a54ba0e22f12acb1ff50d6ed87f0a32d56106429\""
Jul 6 23:58:48.841772 containerd[1511]: time="2025-07-06T23:58:48.841493724Z" level=info msg="StartContainer for \"48c16bb2ff8824494fb2b3f8a54ba0e22f12acb1ff50d6ed87f0a32d56106429\""
Jul 6 23:58:48.872575 systemd[1]: Started cri-containerd-48c16bb2ff8824494fb2b3f8a54ba0e22f12acb1ff50d6ed87f0a32d56106429.scope - libcontainer container 48c16bb2ff8824494fb2b3f8a54ba0e22f12acb1ff50d6ed87f0a32d56106429.
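The "stop pulling" and "Pulled image" entries carry both a byte count (bytes read=35233364) and a wall-clock duration (2.86594885s), so the effective pull rate is a one-line computation — a back-of-the-envelope reading of the logged values, nothing more:

```go
package main

import (
	"fmt"
	"time"
)

// Effective pull rate for ghcr.io/flatcar/calico/typha:v3.30.2, using the
// byte count and duration reported in the containerd entries above.
func main() {
	const bytesRead = 35233364                // "active requests=0, bytes read=35233364"
	d, _ := time.ParseDuration("2.86594885s") // "... size \"35233218\" in 2.86594885s"
	fmt.Printf("%.1f MiB/s\n", float64(bytesRead)/d.Seconds()/(1<<20))
}
```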
Jul 6 23:58:48.904450 containerd[1511]: time="2025-07-06T23:58:48.904397956Z" level=info msg="StartContainer for \"48c16bb2ff8824494fb2b3f8a54ba0e22f12acb1ff50d6ed87f0a32d56106429\" returns successfully"
Jul 6 23:58:49.268658 kubelet[2716]: E0706 23:58:49.268617 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27"
Jul 6 23:58:49.351064 kubelet[2716]: I0706 23:58:49.351003 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bbfffb7db-6n5mr" podStartSLOduration=1.483953718 podStartE2EDuration="4.350984614s" podCreationTimestamp="2025-07-06 23:58:45 +0000 UTC" firstStartedPulling="2025-07-06 23:58:45.904168842 +0000 UTC m=+18.737348917" lastFinishedPulling="2025-07-06 23:58:48.771199728 +0000 UTC m=+21.604379813" observedRunningTime="2025-07-06 23:58:49.350691694 +0000 UTC m=+22.183871799" watchObservedRunningTime="2025-07-06 23:58:49.350984614 +0000 UTC m=+22.184164699"
Jul 6 23:58:49.410489 kubelet[2716]: E0706 23:58:49.410441 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:58:49.410489 kubelet[2716]: W0706 23:58:49.410469 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:58:49.410489 kubelet[2716]: E0706 23:58:49.410492 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 6 23:58:50.341402 kubelet[2716]: I0706 23:58:50.341340 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:50.420592 kubelet[2716]: E0706 23:58:50.420545 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.420592 kubelet[2716]: W0706 23:58:50.420573 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.420809 kubelet[2716]: E0706 23:58:50.420594 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.420948 kubelet[2716]: E0706 23:58:50.420917 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.420948 kubelet[2716]: W0706 23:58:50.420934 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.420948 kubelet[2716]: E0706 23:58:50.420948 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.421234 kubelet[2716]: E0706 23:58:50.421207 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.421657 kubelet[2716]: W0706 23:58:50.421381 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.421657 kubelet[2716]: E0706 23:58:50.421466 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.421848 kubelet[2716]: E0706 23:58:50.421802 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.421848 kubelet[2716]: W0706 23:58:50.421839 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.422035 kubelet[2716]: E0706 23:58:50.421861 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.422199 kubelet[2716]: E0706 23:58:50.422101 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.422199 kubelet[2716]: W0706 23:58:50.422118 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.422199 kubelet[2716]: E0706 23:58:50.422134 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.422576 kubelet[2716]: E0706 23:58:50.422313 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.422576 kubelet[2716]: W0706 23:58:50.422335 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.422576 kubelet[2716]: E0706 23:58:50.422346 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.422576 kubelet[2716]: E0706 23:58:50.422551 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.422576 kubelet[2716]: W0706 23:58:50.422569 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423231 kubelet[2716]: E0706 23:58:50.422579 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.423231 kubelet[2716]: E0706 23:58:50.422750 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.423231 kubelet[2716]: W0706 23:58:50.422760 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423231 kubelet[2716]: E0706 23:58:50.422770 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.423231 kubelet[2716]: E0706 23:58:50.423033 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.423231 kubelet[2716]: W0706 23:58:50.423046 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423231 kubelet[2716]: E0706 23:58:50.423083 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.423465 kubelet[2716]: E0706 23:58:50.423308 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.423465 kubelet[2716]: W0706 23:58:50.423318 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423465 kubelet[2716]: E0706 23:58:50.423328 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.423558 kubelet[2716]: E0706 23:58:50.423520 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.423558 kubelet[2716]: W0706 23:58:50.423529 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423558 kubelet[2716]: E0706 23:58:50.423538 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.423912 kubelet[2716]: E0706 23:58:50.423874 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.423912 kubelet[2716]: W0706 23:58:50.423888 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.423912 kubelet[2716]: E0706 23:58:50.423897 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.424142 kubelet[2716]: E0706 23:58:50.424114 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.424142 kubelet[2716]: W0706 23:58:50.424133 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.424231 kubelet[2716]: E0706 23:58:50.424145 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.424382 kubelet[2716]: E0706 23:58:50.424351 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.424382 kubelet[2716]: W0706 23:58:50.424366 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.424382 kubelet[2716]: E0706 23:58:50.424377 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.424729 kubelet[2716]: E0706 23:58:50.424701 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.424729 kubelet[2716]: W0706 23:58:50.424719 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.424729 kubelet[2716]: E0706 23:58:50.424728 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.449625 kubelet[2716]: E0706 23:58:50.449519 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.449625 kubelet[2716]: W0706 23:58:50.449564 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.449625 kubelet[2716]: E0706 23:58:50.449587 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.450134 kubelet[2716]: E0706 23:58:50.450019 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.450134 kubelet[2716]: W0706 23:58:50.450044 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.450134 kubelet[2716]: E0706 23:58:50.450069 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.450395 kubelet[2716]: E0706 23:58:50.450339 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.450395 kubelet[2716]: W0706 23:58:50.450369 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.450395 kubelet[2716]: E0706 23:58:50.450386 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.450698 kubelet[2716]: E0706 23:58:50.450650 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.450698 kubelet[2716]: W0706 23:58:50.450661 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.450698 kubelet[2716]: E0706 23:58:50.450673 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.450971 kubelet[2716]: E0706 23:58:50.450946 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.451010 kubelet[2716]: W0706 23:58:50.450962 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.451010 kubelet[2716]: E0706 23:58:50.450985 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.451331 kubelet[2716]: E0706 23:58:50.451297 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.451331 kubelet[2716]: W0706 23:58:50.451314 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.451331 kubelet[2716]: E0706 23:58:50.451327 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.451847 kubelet[2716]: E0706 23:58:50.451819 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.451897 kubelet[2716]: W0706 23:58:50.451859 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.451897 kubelet[2716]: E0706 23:58:50.451871 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.452091 kubelet[2716]: E0706 23:58:50.452065 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.452091 kubelet[2716]: W0706 23:58:50.452079 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.452091 kubelet[2716]: E0706 23:58:50.452088 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.452480 kubelet[2716]: E0706 23:58:50.452452 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.452480 kubelet[2716]: W0706 23:58:50.452472 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.452480 kubelet[2716]: E0706 23:58:50.452486 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.452836 kubelet[2716]: E0706 23:58:50.452794 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.452836 kubelet[2716]: W0706 23:58:50.452809 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.452979 kubelet[2716]: E0706 23:58:50.452892 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.453192 kubelet[2716]: E0706 23:58:50.453179 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.453313 kubelet[2716]: W0706 23:58:50.453262 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.453313 kubelet[2716]: E0706 23:58:50.453293 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.453778 kubelet[2716]: E0706 23:58:50.453740 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.453778 kubelet[2716]: W0706 23:58:50.453758 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.453778 kubelet[2716]: E0706 23:58:50.453793 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.454642 kubelet[2716]: E0706 23:58:50.454613 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.454642 kubelet[2716]: W0706 23:58:50.454630 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.454642 kubelet[2716]: E0706 23:58:50.454641 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.454925 kubelet[2716]: E0706 23:58:50.454894 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.454925 kubelet[2716]: W0706 23:58:50.454911 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.455071 kubelet[2716]: E0706 23:58:50.454958 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.455240 kubelet[2716]: E0706 23:58:50.455177 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.455240 kubelet[2716]: W0706 23:58:50.455188 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.455240 kubelet[2716]: E0706 23:58:50.455197 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.455530 kubelet[2716]: E0706 23:58:50.455502 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.455530 kubelet[2716]: W0706 23:58:50.455527 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.455673 kubelet[2716]: E0706 23:58:50.455559 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.455955 kubelet[2716]: E0706 23:58:50.455935 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.455955 kubelet[2716]: W0706 23:58:50.455950 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.456032 kubelet[2716]: E0706 23:58:50.455959 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:50.456484 kubelet[2716]: E0706 23:58:50.456114 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:50.456484 kubelet[2716]: W0706 23:58:50.456125 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:50.456484 kubelet[2716]: E0706 23:58:50.456133 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:50.591603 containerd[1511]: time="2025-07-06T23:58:50.591497216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.593864 containerd[1511]: time="2025-07-06T23:58:50.593208735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:58:50.593952 containerd[1511]: time="2025-07-06T23:58:50.593899204Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.595476 containerd[1511]: time="2025-07-06T23:58:50.595433891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:50.596052 containerd[1511]: time="2025-07-06T23:58:50.595896111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.824556049s" Jul 6 23:58:50.596052 containerd[1511]: time="2025-07-06T23:58:50.595925416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:58:50.598954 containerd[1511]: time="2025-07-06T23:58:50.598920831Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:58:50.611596 containerd[1511]: time="2025-07-06T23:58:50.611548766Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c\"" Jul 6 23:58:50.612321 containerd[1511]: time="2025-07-06T23:58:50.612301051Z" level=info msg="StartContainer for \"5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c\"" Jul 6 23:58:50.653636 systemd[1]: Started cri-containerd-5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c.scope - libcontainer container 5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c. Jul 6 23:58:50.677270 containerd[1511]: time="2025-07-06T23:58:50.677217177Z" level=info msg="StartContainer for \"5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c\" returns successfully" Jul 6 23:58:50.687672 systemd[1]: cri-containerd-5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c.scope: Deactivated successfully. Jul 6 23:58:50.707397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c-rootfs.mount: Deactivated successfully. 
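
The driver-call.go/plugins.go failures above are the kubelet's FlexVolume plugin probe: on each scan of the plugin directory it execs the driver binary (here nodeagent~uds/uds) with the argument init and decodes whatever the binary prints to stdout as JSON. The binary does not exist yet, so the exec produces no output, and decoding an empty string fails with Go's "unexpected end of JSON input"; the pod2daemon-flexvol (flexvol-driver) container pulled and run above is evidently what installs it. A minimal sketch of that failure mode, using only the Go standard library (DriverStatus and callDriver are illustrative names, not the kubelet's own):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the minimal shape a FlexVolume driver prints,
    // e.g. {"status":"Success"}; the field set here is illustrative.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    // callDriver execs the driver binary and decodes its stdout as JSON,
    // which is conceptually what the kubelet's driver-call path does.
    func callDriver(path string, args ...string) (*DriverStatus, error) {
        out, _ := exec.Command(path, args...).Output() // out stays empty when the binary is missing
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // json.Unmarshal on empty input is exactly "unexpected end of JSON input".
            return nil, fmt.Errorf("failed to unmarshal output %q for command %v: %w", out, args, err)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err)
    }

Once a real driver binary is in place, the same probe returns a small JSON status document and these errors stop.
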
Jul 6 23:58:50.755588 containerd[1511]: time="2025-07-06T23:58:50.744105062Z" level=info msg="shim disconnected" id=5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c namespace=k8s.io Jul 6 23:58:50.755588 containerd[1511]: time="2025-07-06T23:58:50.755572404Z" level=warning msg="cleaning up after shim disconnected" id=5f5385bf0b2c546d598e62ec05deaea6ec3cb28cf52a5fa6f33808cebb33b92c namespace=k8s.io Jul 6 23:58:50.755588 containerd[1511]: time="2025-07-06T23:58:50.755587482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:51.267317 kubelet[2716]: E0706 23:58:51.266888 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:51.347185 containerd[1511]: time="2025-07-06T23:58:51.347154853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:58:53.268999 kubelet[2716]: E0706 23:58:53.268952 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:55.198109 kubelet[2716]: I0706 23:58:55.198068 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:55.267474 kubelet[2716]: E0706 23:58:55.266734 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:55.314375 containerd[1511]: time="2025-07-06T23:58:55.314332726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.315800 containerd[1511]: time="2025-07-06T23:58:55.315747276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:58:55.316825 containerd[1511]: time="2025-07-06T23:58:55.316790457Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.318841 containerd[1511]: time="2025-07-06T23:58:55.318822630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:55.319594 containerd[1511]: time="2025-07-06T23:58:55.319289578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.971943847s" Jul 6 23:58:55.319594 containerd[1511]: time="2025-07-06T23:58:55.319314425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:58:55.322455 containerd[1511]: time="2025-07-06T23:58:55.322429503Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:58:55.339400 containerd[1511]: time="2025-07-06T23:58:55.339356870Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96\"" Jul 6 23:58:55.339939 containerd[1511]: time="2025-07-06T23:58:55.339855246Z" level=info msg="StartContainer for \"49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96\"" Jul 6 23:58:55.371569 systemd[1]: Started cri-containerd-49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96.scope - libcontainer container 49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96. Jul 6 23:58:55.403581 containerd[1511]: time="2025-07-06T23:58:55.403539380Z" level=info msg="StartContainer for \"49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96\" returns successfully" Jul 6 23:58:55.770293 systemd[1]: cri-containerd-49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96.scope: Deactivated successfully. Jul 6 23:58:55.793894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96-rootfs.mount: Deactivated successfully. Jul 6 23:58:55.814229 containerd[1511]: time="2025-07-06T23:58:55.814161935Z" level=info msg="shim disconnected" id=49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96 namespace=k8s.io Jul 6 23:58:55.814229 containerd[1511]: time="2025-07-06T23:58:55.814219513Z" level=warning msg="cleaning up after shim disconnected" id=49eadd0358c9475e72d5d71d2494fb93fa41ead0d3078534c9022d8da3096d96 namespace=k8s.io Jul 6 23:58:55.814229 containerd[1511]: time="2025-07-06T23:58:55.814227247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:55.859049 kubelet[2716]: I0706 23:58:55.858344 2716 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:58:55.921622 systemd[1]: Created slice kubepods-besteffort-pod1cc81f56_9823_43af_b0f7_cbcd23fb8712.slice - libcontainer container kubepods-besteffort-pod1cc81f56_9823_43af_b0f7_cbcd23fb8712.slice. Jul 6 23:58:55.928034 systemd[1]: Created slice kubepods-besteffort-pod3959008a_f171_4a04_98c6_424b260ddb24.slice - libcontainer container kubepods-besteffort-pod3959008a_f171_4a04_98c6_424b260ddb24.slice. Jul 6 23:58:55.934634 systemd[1]: Created slice kubepods-besteffort-pod7f00b71d_ed32_42ca_b9e1_292600c3ce63.slice - libcontainer container kubepods-besteffort-pod7f00b71d_ed32_42ca_b9e1_292600c3ce63.slice. Jul 6 23:58:55.943006 systemd[1]: Created slice kubepods-besteffort-podca3bb848_e578_4ffb_b363_75e2789c4189.slice - libcontainer container kubepods-besteffort-podca3bb848_e578_4ffb_b363_75e2789c4189.slice. Jul 6 23:58:55.948157 systemd[1]: Created slice kubepods-burstable-poddb7c0d17_ff75_46b9_b7cd_e8f7f2291fe7.slice - libcontainer container kubepods-burstable-poddb7c0d17_ff75_46b9_b7cd_e8f7f2291fe7.slice. Jul 6 23:58:55.954206 systemd[1]: Created slice kubepods-besteffort-pod51e7e008_60ac_4100_81fe_67be6774ad5f.slice - libcontainer container kubepods-besteffort-pod51e7e008_60ac_4100_81fe_67be6774ad5f.slice. 
Jul 6 23:58:55.960385 systemd[1]: Created slice kubepods-burstable-podf3c3c4c9_a36c_4c4b_a180_4d2623fe83c4.slice - libcontainer container kubepods-burstable-podf3c3c4c9_a36c_4c4b_a180_4d2623fe83c4.slice. Jul 6 23:58:55.991843 kubelet[2716]: I0706 23:58:55.991805 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cxd\" (UniqueName: \"kubernetes.io/projected/7f00b71d-ed32-42ca-b9e1-292600c3ce63-kube-api-access-s2cxd\") pod \"calico-apiserver-575b6688c4-kx7lz\" (UID: \"7f00b71d-ed32-42ca-b9e1-292600c3ce63\") " pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" Jul 6 23:58:55.991843 kubelet[2716]: I0706 23:58:55.991844 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pn6\" (UniqueName: \"kubernetes.io/projected/1cc81f56-9823-43af-b0f7-cbcd23fb8712-kube-api-access-p4pn6\") pod \"whisker-7f5c8dd9b9-c5mwr\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " pod="calico-system/whisker-7f5c8dd9b9-c5mwr" Jul 6 23:58:55.992000 kubelet[2716]: I0706 23:58:55.991874 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtcp\" (UniqueName: \"kubernetes.io/projected/3959008a-f171-4a04-98c6-424b260ddb24-kube-api-access-7mtcp\") pod \"calico-kube-controllers-985fd5996-856v9\" (UID: \"3959008a-f171-4a04-98c6-424b260ddb24\") " pod="calico-system/calico-kube-controllers-985fd5996-856v9" Jul 6 23:58:55.992000 kubelet[2716]: I0706 23:58:55.991889 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxfhk\" (UniqueName: \"kubernetes.io/projected/f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4-kube-api-access-sxfhk\") pod \"coredns-674b8bbfcf-szql8\" (UID: \"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4\") " pod="kube-system/coredns-674b8bbfcf-szql8" Jul 6 23:58:55.992000 kubelet[2716]: I0706 23:58:55.991906 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e7e008-60ac-4100-81fe-67be6774ad5f-config\") pod \"goldmane-768f4c5c69-966nl\" (UID: \"51e7e008-60ac-4100-81fe-67be6774ad5f\") " pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:55.992000 kubelet[2716]: I0706 23:58:55.991919 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51e7e008-60ac-4100-81fe-67be6774ad5f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-966nl\" (UID: \"51e7e008-60ac-4100-81fe-67be6774ad5f\") " pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:55.992000 kubelet[2716]: I0706 23:58:55.991933 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7-config-volume\") pod \"coredns-674b8bbfcf-x79tg\" (UID: \"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7\") " pod="kube-system/coredns-674b8bbfcf-x79tg" Jul 6 23:58:55.992092 kubelet[2716]: I0706 23:58:55.991949 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-backend-key-pair\") pod \"whisker-7f5c8dd9b9-c5mwr\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " pod="calico-system/whisker-7f5c8dd9b9-c5mwr" Jul 6 23:58:55.992092 kubelet[2716]: 
I0706 23:58:55.991963 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca3bb848-e578-4ffb-b363-75e2789c4189-calico-apiserver-certs\") pod \"calico-apiserver-575b6688c4-pbb6c\" (UID: \"ca3bb848-e578-4ffb-b363-75e2789c4189\") " pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" Jul 6 23:58:55.992092 kubelet[2716]: I0706 23:58:55.991975 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psc2f\" (UniqueName: \"kubernetes.io/projected/db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7-kube-api-access-psc2f\") pod \"coredns-674b8bbfcf-x79tg\" (UID: \"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7\") " pod="kube-system/coredns-674b8bbfcf-x79tg" Jul 6 23:58:55.992092 kubelet[2716]: I0706 23:58:55.991989 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9262\" (UniqueName: \"kubernetes.io/projected/51e7e008-60ac-4100-81fe-67be6774ad5f-kube-api-access-f9262\") pod \"goldmane-768f4c5c69-966nl\" (UID: \"51e7e008-60ac-4100-81fe-67be6774ad5f\") " pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:55.992092 kubelet[2716]: I0706 23:58:55.992001 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-ca-bundle\") pod \"whisker-7f5c8dd9b9-c5mwr\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " pod="calico-system/whisker-7f5c8dd9b9-c5mwr" Jul 6 23:58:55.992180 kubelet[2716]: I0706 23:58:55.992012 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3959008a-f171-4a04-98c6-424b260ddb24-tigera-ca-bundle\") pod \"calico-kube-controllers-985fd5996-856v9\" (UID: \"3959008a-f171-4a04-98c6-424b260ddb24\") " pod="calico-system/calico-kube-controllers-985fd5996-856v9" Jul 6 23:58:55.992180 kubelet[2716]: I0706 23:58:55.992025 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2r2v\" (UniqueName: \"kubernetes.io/projected/ca3bb848-e578-4ffb-b363-75e2789c4189-kube-api-access-g2r2v\") pod \"calico-apiserver-575b6688c4-pbb6c\" (UID: \"ca3bb848-e578-4ffb-b363-75e2789c4189\") " pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" Jul 6 23:58:55.992180 kubelet[2716]: I0706 23:58:55.992048 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f00b71d-ed32-42ca-b9e1-292600c3ce63-calico-apiserver-certs\") pod \"calico-apiserver-575b6688c4-kx7lz\" (UID: \"7f00b71d-ed32-42ca-b9e1-292600c3ce63\") " pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" Jul 6 23:58:55.992180 kubelet[2716]: I0706 23:58:55.992067 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/51e7e008-60ac-4100-81fe-67be6774ad5f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-966nl\" (UID: \"51e7e008-60ac-4100-81fe-67be6774ad5f\") " pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:55.992180 kubelet[2716]: I0706 23:58:55.992079 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4-config-volume\") pod \"coredns-674b8bbfcf-szql8\" (UID: \"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4\") " pod="kube-system/coredns-674b8bbfcf-szql8" Jul 6 23:58:56.230804 containerd[1511]: time="2025-07-06T23:58:56.230744922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5c8dd9b9-c5mwr,Uid:1cc81f56-9823-43af-b0f7-cbcd23fb8712,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:56.239884 containerd[1511]: time="2025-07-06T23:58:56.239841920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-985fd5996-856v9,Uid:3959008a-f171-4a04-98c6-424b260ddb24,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:56.240339 containerd[1511]: time="2025-07-06T23:58:56.240302758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-kx7lz,Uid:7f00b71d-ed32-42ca-b9e1-292600c3ce63,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:58:56.246032 containerd[1511]: time="2025-07-06T23:58:56.246005062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-pbb6c,Uid:ca3bb848-e578-4ffb-b363-75e2789c4189,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:58:56.250818 containerd[1511]: time="2025-07-06T23:58:56.250785593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x79tg,Uid:db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:56.261664 containerd[1511]: time="2025-07-06T23:58:56.261435512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-966nl,Uid:51e7e008-60ac-4100-81fe-67be6774ad5f,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:56.263651 containerd[1511]: time="2025-07-06T23:58:56.263354040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szql8,Uid:f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:56.426012 containerd[1511]: time="2025-07-06T23:58:56.425980145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:58:56.539067 containerd[1511]: time="2025-07-06T23:58:56.538831312Z" level=error msg="Failed to destroy network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.539067 containerd[1511]: time="2025-07-06T23:58:56.538876036Z" level=error msg="Failed to destroy network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.542201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d-shm.mount: Deactivated successfully. Jul 6 23:58:56.542282 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af-shm.mount: Deactivated successfully. 
Jul 6 23:58:56.543392 containerd[1511]: time="2025-07-06T23:58:56.543367733Z" level=error msg="encountered an error cleaning up failed sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.544517 containerd[1511]: time="2025-07-06T23:58:56.544493560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-kx7lz,Uid:7f00b71d-ed32-42ca-b9e1-292600c3ce63,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.545610 containerd[1511]: time="2025-07-06T23:58:56.545579582Z" level=error msg="Failed to destroy network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.547948 containerd[1511]: time="2025-07-06T23:58:56.547647652Z" level=error msg="encountered an error cleaning up failed sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.547948 containerd[1511]: time="2025-07-06T23:58:56.547697836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-pbb6c,Uid:ca3bb848-e578-4ffb-b363-75e2789c4189,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.549475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb-shm.mount: Deactivated successfully. 
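
The kubelet entries that follow show the same CNI error surfacing once per layer: containerd returns it as a gRPC error, the runtime client logs it (log.go), kuberuntime_sandbox.go and kuberuntime_manager.go wrap it into a CreatePodSandbox failure, and pod_workers.go finally quotes the whole chain. Each quoting step doubles the backslash escaping, which is why the innermost plugin type=\"calico\" appears as plugin type=\\\"calico\\\" in the "Error syncing pod" entries. A small sketch of that mechanism, standard library only, with informal layer names:

    package main

    import "fmt"

    func main() {
        // Innermost CNI error, quotes introduced by %q:
        cni := fmt.Errorf("plugin type=%q failed (add): stat /var/lib/calico/nodename: no such file or directory", "calico")
        // Transport layer wraps it verbatim:
        rpc := fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox: %v", cni)
        // Pod-sync layer quotes the previous string, escaping its quotes once:
        sync := fmt.Errorf("failed to %q for pod with CreatePodSandboxError: %q", "CreatePodSandbox", rpc.Error())
        // A structured logger emitting err=%q adds the final level of escaping:
        fmt.Printf("err=%q\n", sync.Error())
    }
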
Jul 6 23:58:56.554088 kubelet[2716]: E0706 23:58:56.554048 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.554622 kubelet[2716]: E0706 23:58:56.554117 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" Jul 6 23:58:56.554622 kubelet[2716]: E0706 23:58:56.554138 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" Jul 6 23:58:56.554622 kubelet[2716]: E0706 23:58:56.554184 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575b6688c4-pbb6c_calico-apiserver(ca3bb848-e578-4ffb-b363-75e2789c4189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575b6688c4-pbb6c_calico-apiserver(ca3bb848-e578-4ffb-b363-75e2789c4189)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" podUID="ca3bb848-e578-4ffb-b363-75e2789c4189" Jul 6 23:58:56.555848 kubelet[2716]: E0706 23:58:56.555217 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.555848 kubelet[2716]: E0706 23:58:56.555263 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" Jul 6 23:58:56.555848 kubelet[2716]: E0706 23:58:56.555278 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" Jul 6 23:58:56.555943 kubelet[2716]: E0706 23:58:56.555307 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-575b6688c4-kx7lz_calico-apiserver(7f00b71d-ed32-42ca-b9e1-292600c3ce63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-575b6688c4-kx7lz_calico-apiserver(7f00b71d-ed32-42ca-b9e1-292600c3ce63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" podUID="7f00b71d-ed32-42ca-b9e1-292600c3ce63" Jul 6 23:58:56.556481 containerd[1511]: time="2025-07-06T23:58:56.556341672Z" level=error msg="encountered an error cleaning up failed sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.556481 containerd[1511]: time="2025-07-06T23:58:56.556390574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x79tg,Uid:db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.556606 kubelet[2716]: E0706 23:58:56.556551 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.556636 kubelet[2716]: E0706 23:58:56.556578 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x79tg" Jul 6 23:58:56.556636 kubelet[2716]: E0706 23:58:56.556626 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x79tg" Jul 6 23:58:56.557429 kubelet[2716]: E0706 23:58:56.556783 2716 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x79tg_kube-system(db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x79tg_kube-system(db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x79tg" podUID="db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7" Jul 6 23:58:56.561116 containerd[1511]: time="2025-07-06T23:58:56.561076006Z" level=error msg="Failed to destroy network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.561898 containerd[1511]: time="2025-07-06T23:58:56.561866773Z" level=error msg="encountered an error cleaning up failed sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.561941 containerd[1511]: time="2025-07-06T23:58:56.561910576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-966nl,Uid:51e7e008-60ac-4100-81fe-67be6774ad5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.562548 kubelet[2716]: E0706 23:58:56.562217 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.562548 kubelet[2716]: E0706 23:58:56.562247 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:56.562548 kubelet[2716]: E0706 23:58:56.562261 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-966nl" Jul 6 23:58:56.564119 kubelet[2716]: E0706 23:58:56.562320 2716 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-966nl_calico-system(51e7e008-60ac-4100-81fe-67be6774ad5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-966nl_calico-system(51e7e008-60ac-4100-81fe-67be6774ad5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-966nl" podUID="51e7e008-60ac-4100-81fe-67be6774ad5f" Jul 6 23:58:56.565490 containerd[1511]: time="2025-07-06T23:58:56.565452707Z" level=error msg="Failed to destroy network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.565823 containerd[1511]: time="2025-07-06T23:58:56.565802555Z" level=error msg="encountered an error cleaning up failed sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.565996 containerd[1511]: time="2025-07-06T23:58:56.565919685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-985fd5996-856v9,Uid:3959008a-f171-4a04-98c6-424b260ddb24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.566172 kubelet[2716]: E0706 23:58:56.566031 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.566172 kubelet[2716]: E0706 23:58:56.566143 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-985fd5996-856v9" Jul 6 23:58:56.566172 kubelet[2716]: E0706 23:58:56.566158 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-985fd5996-856v9" Jul 6 23:58:56.566284 kubelet[2716]: E0706 23:58:56.566193 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-985fd5996-856v9_calico-system(3959008a-f171-4a04-98c6-424b260ddb24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-985fd5996-856v9_calico-system(3959008a-f171-4a04-98c6-424b260ddb24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-985fd5996-856v9" podUID="3959008a-f171-4a04-98c6-424b260ddb24" Jul 6 23:58:56.566519 containerd[1511]: time="2025-07-06T23:58:56.566395580Z" level=error msg="Failed to destroy network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.566760 containerd[1511]: time="2025-07-06T23:58:56.566720362Z" level=error msg="encountered an error cleaning up failed sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.566847 containerd[1511]: time="2025-07-06T23:58:56.566828615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5c8dd9b9-c5mwr,Uid:1cc81f56-9823-43af-b0f7-cbcd23fb8712,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.567152 kubelet[2716]: E0706 23:58:56.567029 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.567152 kubelet[2716]: E0706 23:58:56.567066 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5c8dd9b9-c5mwr" Jul 6 23:58:56.567152 kubelet[2716]: E0706 23:58:56.567082 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5c8dd9b9-c5mwr" Jul 6 23:58:56.567242 kubelet[2716]: E0706 23:58:56.567109 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f5c8dd9b9-c5mwr_calico-system(1cc81f56-9823-43af-b0f7-cbcd23fb8712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f5c8dd9b9-c5mwr_calico-system(1cc81f56-9823-43af-b0f7-cbcd23fb8712)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5c8dd9b9-c5mwr" podUID="1cc81f56-9823-43af-b0f7-cbcd23fb8712" Jul 6 23:58:56.568496 containerd[1511]: time="2025-07-06T23:58:56.568462358Z" level=error msg="Failed to destroy network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.568799 containerd[1511]: time="2025-07-06T23:58:56.568769144Z" level=error msg="encountered an error cleaning up failed sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.568951 containerd[1511]: time="2025-07-06T23:58:56.568805582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szql8,Uid:f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.569001 kubelet[2716]: E0706 23:58:56.568908 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:56.569001 kubelet[2716]: E0706 23:58:56.568931 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-szql8" Jul 6 23:58:56.569001 kubelet[2716]: E0706 23:58:56.568946 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-szql8" Jul 6 23:58:56.569105 kubelet[2716]: E0706 23:58:56.568973 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-szql8_kube-system(f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-szql8_kube-system(f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-szql8" podUID="f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4" Jul 6 23:58:57.272705 systemd[1]: Created slice kubepods-besteffort-pod0ca62d42_186f_4f68_9ed0_588257419b27.slice - libcontainer container kubepods-besteffort-pod0ca62d42_186f_4f68_9ed0_588257419b27.slice. Jul 6 23:58:57.274905 containerd[1511]: time="2025-07-06T23:58:57.274872501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2k7qf,Uid:0ca62d42-186f-4f68-9ed0-588257419b27,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:57.335017 containerd[1511]: time="2025-07-06T23:58:57.334956556Z" level=error msg="Failed to destroy network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.336633 containerd[1511]: time="2025-07-06T23:58:57.336580670Z" level=error msg="encountered an error cleaning up failed sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.336704 containerd[1511]: time="2025-07-06T23:58:57.336651192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2k7qf,Uid:0ca62d42-186f-4f68-9ed0-588257419b27,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.336920 kubelet[2716]: E0706 23:58:57.336882 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.336971 kubelet[2716]: E0706 23:58:57.336936 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:57.336971 kubelet[2716]: E0706 23:58:57.336959 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2k7qf" Jul 6 23:58:57.337017 kubelet[2716]: E0706 23:58:57.337000 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2k7qf_calico-system(0ca62d42-186f-4f68-9ed0-588257419b27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2k7qf_calico-system(0ca62d42-186f-4f68-9ed0-588257419b27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:57.337671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a-shm.mount: Deactivated successfully. Jul 6 23:58:57.337743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79-shm.mount: Deactivated successfully. Jul 6 23:58:57.337806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2-shm.mount: Deactivated successfully. Jul 6 23:58:57.337855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0-shm.mount: Deactivated successfully. 
Jul 6 23:58:57.421848 kubelet[2716]: I0706 23:58:57.421801 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:58:57.423873 kubelet[2716]: I0706 23:58:57.423820 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:58:57.429231 containerd[1511]: time="2025-07-06T23:58:57.429171937Z" level=info msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" Jul 6 23:58:57.431488 containerd[1511]: time="2025-07-06T23:58:57.431340064Z" level=info msg="Ensure that sandbox 2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af in task-service has been cleanup successfully" Jul 6 23:58:57.432517 containerd[1511]: time="2025-07-06T23:58:57.432024602Z" level=info msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\"" Jul 6 23:58:57.432571 containerd[1511]: time="2025-07-06T23:58:57.432519302Z" level=info msg="Ensure that sandbox 474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79 in task-service has been cleanup successfully" Jul 6 23:58:57.435402 kubelet[2716]: I0706 23:58:57.434562 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:58:57.435794 containerd[1511]: time="2025-07-06T23:58:57.435769645Z" level=info msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" Jul 6 23:58:57.435907 containerd[1511]: time="2025-07-06T23:58:57.435883398Z" level=info msg="Ensure that sandbox f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a in task-service has been cleanup successfully" Jul 6 23:58:57.441775 kubelet[2716]: I0706 23:58:57.441220 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:58:57.443992 containerd[1511]: time="2025-07-06T23:58:57.443424922Z" level=info msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\"" Jul 6 23:58:57.444722 containerd[1511]: time="2025-07-06T23:58:57.444573341Z" level=info msg="Ensure that sandbox c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb in task-service has been cleanup successfully" Jul 6 23:58:57.449861 kubelet[2716]: I0706 23:58:57.449724 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:58:57.453622 containerd[1511]: time="2025-07-06T23:58:57.453156131Z" level=info msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\"" Jul 6 23:58:57.453622 containerd[1511]: time="2025-07-06T23:58:57.453293650Z" level=info msg="Ensure that sandbox 0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d in task-service has been cleanup successfully" Jul 6 23:58:57.458743 kubelet[2716]: I0706 23:58:57.458721 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:58:57.462441 containerd[1511]: time="2025-07-06T23:58:57.461842407Z" level=info msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\"" Jul 6 23:58:57.462441 containerd[1511]: 
time="2025-07-06T23:58:57.461971740Z" level=info msg="Ensure that sandbox 0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78 in task-service has been cleanup successfully" Jul 6 23:58:57.468297 kubelet[2716]: I0706 23:58:57.468171 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:58:57.472100 containerd[1511]: time="2025-07-06T23:58:57.472073728Z" level=info msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" Jul 6 23:58:57.473302 kubelet[2716]: I0706 23:58:57.473282 2716 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:58:57.473543 containerd[1511]: time="2025-07-06T23:58:57.473525127Z" level=info msg="Ensure that sandbox aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2 in task-service has been cleanup successfully" Jul 6 23:58:57.474120 containerd[1511]: time="2025-07-06T23:58:57.473791349Z" level=info msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" Jul 6 23:58:57.474312 containerd[1511]: time="2025-07-06T23:58:57.474297449Z" level=info msg="Ensure that sandbox a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0 in task-service has been cleanup successfully" Jul 6 23:58:57.511535 containerd[1511]: time="2025-07-06T23:58:57.511483883Z" level=error msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" failed" error="failed to destroy network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.511778 kubelet[2716]: E0706 23:58:57.511676 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:58:57.511778 kubelet[2716]: E0706 23:58:57.511721 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79"} Jul 6 23:58:57.513095 kubelet[2716]: E0706 23:58:57.511792 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51e7e008-60ac-4100-81fe-67be6774ad5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.513095 kubelet[2716]: E0706 23:58:57.511813 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51e7e008-60ac-4100-81fe-67be6774ad5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-966nl" podUID="51e7e008-60ac-4100-81fe-67be6774ad5f" Jul 6 23:58:57.523863 containerd[1511]: time="2025-07-06T23:58:57.523674138Z" level=error msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" failed" error="failed to destroy network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.525330 kubelet[2716]: E0706 23:58:57.524312 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:58:57.525330 kubelet[2716]: E0706 23:58:57.524374 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af"} Jul 6 23:58:57.525330 kubelet[2716]: E0706 23:58:57.524402 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f00b71d-ed32-42ca-b9e1-292600c3ce63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.525330 kubelet[2716]: E0706 23:58:57.525290 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f00b71d-ed32-42ca-b9e1-292600c3ce63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" podUID="7f00b71d-ed32-42ca-b9e1-292600c3ce63" Jul 6 23:58:57.535731 containerd[1511]: time="2025-07-06T23:58:57.535633709Z" level=error msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" failed" error="failed to destroy network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.536138 kubelet[2716]: E0706 23:58:57.535963 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:58:57.536138 kubelet[2716]: E0706 23:58:57.536011 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb"} Jul 6 23:58:57.536138 kubelet[2716]: E0706 23:58:57.536043 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.536138 kubelet[2716]: E0706 23:58:57.536063 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x79tg" podUID="db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7" Jul 6 23:58:57.536708 containerd[1511]: time="2025-07-06T23:58:57.536488697Z" level=error msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" failed" error="failed to destroy network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.536870 kubelet[2716]: E0706 23:58:57.536681 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:58:57.536870 kubelet[2716]: E0706 23:58:57.536807 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a"} Jul 6 23:58:57.536870 kubelet[2716]: E0706 23:58:57.536830 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 6 23:58:57.537017 kubelet[2716]: E0706 23:58:57.536846 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-szql8" podUID="f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4" Jul 6 23:58:57.542064 containerd[1511]: time="2025-07-06T23:58:57.542028595Z" level=error msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" failed" error="failed to destroy network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.542437 kubelet[2716]: E0706 23:58:57.542378 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:58:57.542610 kubelet[2716]: E0706 23:58:57.542418 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78"} Jul 6 23:58:57.542610 kubelet[2716]: E0706 23:58:57.542553 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ca62d42-186f-4f68-9ed0-588257419b27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.542610 kubelet[2716]: E0706 23:58:57.542572 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ca62d42-186f-4f68-9ed0-588257419b27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2k7qf" podUID="0ca62d42-186f-4f68-9ed0-588257419b27" Jul 6 23:58:57.546365 containerd[1511]: time="2025-07-06T23:58:57.546175814Z" level=error msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" failed" error="failed to destroy network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 6 23:58:57.547322 kubelet[2716]: E0706 23:58:57.547138 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:58:57.547322 kubelet[2716]: E0706 23:58:57.547168 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d"} Jul 6 23:58:57.547322 kubelet[2716]: E0706 23:58:57.547209 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca3bb848-e578-4ffb-b363-75e2789c4189\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.547322 kubelet[2716]: E0706 23:58:57.547237 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca3bb848-e578-4ffb-b363-75e2789c4189\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" podUID="ca3bb848-e578-4ffb-b363-75e2789c4189" Jul 6 23:58:57.548472 containerd[1511]: time="2025-07-06T23:58:57.548450802Z" level=error msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" failed" error="failed to destroy network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.548644 containerd[1511]: time="2025-07-06T23:58:57.548582189Z" level=error msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" failed" error="failed to destroy network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:57.548878 kubelet[2716]: E0706 23:58:57.548852 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:58:57.548954 kubelet[2716]: E0706 
23:58:57.548921 2716 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:58:57.548954 kubelet[2716]: E0706 23:58:57.548951 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0"} Jul 6 23:58:57.549024 kubelet[2716]: E0706 23:58:57.548982 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.549024 kubelet[2716]: E0706 23:58:57.549000 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5c8dd9b9-c5mwr" podUID="1cc81f56-9823-43af-b0f7-cbcd23fb8712" Jul 6 23:58:57.549024 kubelet[2716]: E0706 23:58:57.548934 2716 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2"} Jul 6 23:58:57.549111 kubelet[2716]: E0706 23:58:57.549025 2716 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3959008a-f171-4a04-98c6-424b260ddb24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:57.549111 kubelet[2716]: E0706 23:58:57.549039 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3959008a-f171-4a04-98c6-424b260ddb24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-985fd5996-856v9" podUID="3959008a-f171-4a04-98c6-424b260ddb24" Jul 6 23:59:03.776779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186139241.mount: Deactivated successfully. 
Jul 6 23:59:03.847124 containerd[1511]: time="2025-07-06T23:59:03.844907191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:03.847486 containerd[1511]: time="2025-07-06T23:59:03.843540672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:59:03.869441 containerd[1511]: time="2025-07-06T23:59:03.868867291Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:03.872400 containerd[1511]: time="2025-07-06T23:59:03.871998479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:03.872400 containerd[1511]: time="2025-07-06T23:59:03.872325293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.443342691s" Jul 6 23:59:03.872400 containerd[1511]: time="2025-07-06T23:59:03.872352706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:59:03.907659 containerd[1511]: time="2025-07-06T23:59:03.907616068Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:59:03.946464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630140322.mount: Deactivated successfully. Jul 6 23:59:03.958567 containerd[1511]: time="2025-07-06T23:59:03.958513456Z" level=info msg="CreateContainer within sandbox \"e598de7a9e31bd120ac2ebcf4f1697d8214b7f7ba3b4be404742658727bc4262\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45\"" Jul 6 23:59:03.966883 containerd[1511]: time="2025-07-06T23:59:03.966748751Z" level=info msg="StartContainer for \"3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45\"" Jul 6 23:59:04.135560 systemd[1]: Started cri-containerd-3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45.scope - libcontainer container 3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45. Jul 6 23:59:04.163075 containerd[1511]: time="2025-07-06T23:59:04.162968298Z" level=info msg="StartContainer for \"3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45\" returns successfully" Jul 6 23:59:04.322277 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:59:04.323540 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 6 23:59:04.524428 containerd[1511]: time="2025-07-06T23:59:04.524272160Z" level=info msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" Jul 6 23:59:04.672436 kubelet[2716]: I0706 23:59:04.657075 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vrxs4" podStartSLOduration=1.993654538 podStartE2EDuration="19.638264595s" podCreationTimestamp="2025-07-06 23:58:45 +0000 UTC" firstStartedPulling="2025-07-06 23:58:46.228306318 +0000 UTC m=+19.061486393" lastFinishedPulling="2025-07-06 23:59:03.872916375 +0000 UTC m=+36.706096450" observedRunningTime="2025-07-06 23:59:04.60232373 +0000 UTC m=+37.435503825" watchObservedRunningTime="2025-07-06 23:59:04.638264595 +0000 UTC m=+37.471444671" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.626 [INFO][3953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.630 [INFO][3953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" iface="eth0" netns="/var/run/netns/cni-d7e6415d-5413-f577-02f1-10f175de1dc4" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.632 [INFO][3953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" iface="eth0" netns="/var/run/netns/cni-d7e6415d-5413-f577-02f1-10f175de1dc4" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.635 [INFO][3953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" iface="eth0" netns="/var/run/netns/cni-d7e6415d-5413-f577-02f1-10f175de1dc4" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.635 [INFO][3953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.637 [INFO][3953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.829 [INFO][3961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.832 [INFO][3961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.832 [INFO][3961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.841 [WARNING][3961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.841 [INFO][3961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.843 [INFO][3961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:04.846612 containerd[1511]: 2025-07-06 23:59:04.844 [INFO][3953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:04.848989 systemd[1]: run-netns-cni\x2dd7e6415d\x2d5413\x2df577\x2d02f1\x2d10f175de1dc4.mount: Deactivated successfully. Jul 6 23:59:04.852679 containerd[1511]: time="2025-07-06T23:59:04.852645387Z" level=info msg="TearDown network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" successfully" Jul 6 23:59:04.852743 containerd[1511]: time="2025-07-06T23:59:04.852682366Z" level=info msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" returns successfully" Jul 6 23:59:04.972448 kubelet[2716]: I0706 23:59:04.972026 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-backend-key-pair\") pod \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " Jul 6 23:59:04.972448 kubelet[2716]: I0706 23:59:04.972102 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-ca-bundle\") pod \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " Jul 6 23:59:04.972448 kubelet[2716]: I0706 23:59:04.972160 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4pn6\" (UniqueName: \"kubernetes.io/projected/1cc81f56-9823-43af-b0f7-cbcd23fb8712-kube-api-access-p4pn6\") pod \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\" (UID: \"1cc81f56-9823-43af-b0f7-cbcd23fb8712\") " Jul 6 23:59:04.982454 kubelet[2716]: I0706 23:59:04.981792 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1cc81f56-9823-43af-b0f7-cbcd23fb8712" (UID: "1cc81f56-9823-43af-b0f7-cbcd23fb8712"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:59:04.988601 kubelet[2716]: I0706 23:59:04.988551 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc81f56-9823-43af-b0f7-cbcd23fb8712-kube-api-access-p4pn6" (OuterVolumeSpecName: "kube-api-access-p4pn6") pod "1cc81f56-9823-43af-b0f7-cbcd23fb8712" (UID: "1cc81f56-9823-43af-b0f7-cbcd23fb8712"). InnerVolumeSpecName "kube-api-access-p4pn6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:59:04.988574 systemd[1]: var-lib-kubelet-pods-1cc81f56\x2d9823\x2d43af\x2db0f7\x2dcbcd23fb8712-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 6 23:59:04.988862 kubelet[2716]: I0706 23:59:04.988831 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1cc81f56-9823-43af-b0f7-cbcd23fb8712" (UID: "1cc81f56-9823-43af-b0f7-cbcd23fb8712"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:59:04.992339 systemd[1]: var-lib-kubelet-pods-1cc81f56\x2d9823\x2d43af\x2db0f7\x2dcbcd23fb8712-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4pn6.mount: Deactivated successfully. Jul 6 23:59:05.073154 kubelet[2716]: I0706 23:59:05.073105 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4pn6\" (UniqueName: \"kubernetes.io/projected/1cc81f56-9823-43af-b0f7-cbcd23fb8712-kube-api-access-p4pn6\") on node \"ci-4081-3-4-2-e8b158d58b\" DevicePath \"\"" Jul 6 23:59:05.073154 kubelet[2716]: I0706 23:59:05.073143 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-backend-key-pair\") on node \"ci-4081-3-4-2-e8b158d58b\" DevicePath \"\"" Jul 6 23:59:05.073154 kubelet[2716]: I0706 23:59:05.073155 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cc81f56-9823-43af-b0f7-cbcd23fb8712-whisker-ca-bundle\") on node \"ci-4081-3-4-2-e8b158d58b\" DevicePath \"\"" Jul 6 23:59:05.291354 systemd[1]: Removed slice kubepods-besteffort-pod1cc81f56_9823_43af_b0f7_cbcd23fb8712.slice - libcontainer container kubepods-besteffort-pod1cc81f56_9823_43af_b0f7_cbcd23fb8712.slice. Jul 6 23:59:05.743131 systemd[1]: Created slice kubepods-besteffort-pod221586a2_400a_4766_b395_7a5d958c9730.slice - libcontainer container kubepods-besteffort-pod221586a2_400a_4766_b395_7a5d958c9730.slice. 
Jul 6 23:59:05.748435 kubelet[2716]: I0706 23:59:05.748212 2716 status_manager.go:895] "Failed to get status for pod" podUID="221586a2-400a-4766-b395-7a5d958c9730" pod="calico-system/whisker-8f89ccf99-9gx9x" err="pods \"whisker-8f89ccf99-9gx9x\" is forbidden: User \"system:node:ci-4081-3-4-2-e8b158d58b\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-2-e8b158d58b' and this object" Jul 6 23:59:05.758978 kubelet[2716]: E0706 23:59:05.751800 2716 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-4-2-e8b158d58b\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-2-e8b158d58b' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret" Jul 6 23:59:05.758978 kubelet[2716]: E0706 23:59:05.751301 2716 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-4-2-e8b158d58b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-2-e8b158d58b' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Jul 6 23:59:05.894535 kubelet[2716]: I0706 23:59:05.894479 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221586a2-400a-4766-b395-7a5d958c9730-whisker-ca-bundle\") pod \"whisker-8f89ccf99-9gx9x\" (UID: \"221586a2-400a-4766-b395-7a5d958c9730\") " pod="calico-system/whisker-8f89ccf99-9gx9x" Jul 6 23:59:05.894535 kubelet[2716]: I0706 23:59:05.894516 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qbd\" (UniqueName: \"kubernetes.io/projected/221586a2-400a-4766-b395-7a5d958c9730-kube-api-access-42qbd\") pod \"whisker-8f89ccf99-9gx9x\" (UID: \"221586a2-400a-4766-b395-7a5d958c9730\") " pod="calico-system/whisker-8f89ccf99-9gx9x" Jul 6 23:59:05.894535 kubelet[2716]: I0706 23:59:05.894549 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/221586a2-400a-4766-b395-7a5d958c9730-whisker-backend-key-pair\") pod \"whisker-8f89ccf99-9gx9x\" (UID: \"221586a2-400a-4766-b395-7a5d958c9730\") " pod="calico-system/whisker-8f89ccf99-9gx9x" Jul 6 23:59:06.157447 kernel: bpftool[4148]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:59:06.354505 systemd-networkd[1398]: vxlan.calico: Link UP Jul 6 23:59:06.354516 systemd-networkd[1398]: vxlan.calico: Gained carrier Jul 6 23:59:06.997685 kubelet[2716]: E0706 23:59:06.997531 2716 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:59:06.998963 kubelet[2716]: E0706 23:59:06.998913 2716 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 6 23:59:07.007020 kubelet[2716]: E0706 23:59:07.006972 2716 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/221586a2-400a-4766-b395-7a5d958c9730-whisker-ca-bundle podName:221586a2-400a-4766-b395-7a5d958c9730 nodeName:}" failed. No retries permitted until 2025-07-06 23:59:07.497618205 +0000 UTC m=+40.330798290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/221586a2-400a-4766-b395-7a5d958c9730-whisker-ca-bundle") pod "whisker-8f89ccf99-9gx9x" (UID: "221586a2-400a-4766-b395-7a5d958c9730") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:59:07.007256 kubelet[2716]: E0706 23:59:07.007032 2716 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/221586a2-400a-4766-b395-7a5d958c9730-whisker-backend-key-pair podName:221586a2-400a-4766-b395-7a5d958c9730 nodeName:}" failed. No retries permitted until 2025-07-06 23:59:07.507013859 +0000 UTC m=+40.340193944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/221586a2-400a-4766-b395-7a5d958c9730-whisker-backend-key-pair") pod "whisker-8f89ccf99-9gx9x" (UID: "221586a2-400a-4766-b395-7a5d958c9730") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:59:07.272372 kubelet[2716]: I0706 23:59:07.272250 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc81f56-9823-43af-b0f7-cbcd23fb8712" path="/var/lib/kubelet/pods/1cc81f56-9823-43af-b0f7-cbcd23fb8712/volumes" Jul 6 23:59:07.380574 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Jul 6 23:59:07.546597 containerd[1511]: time="2025-07-06T23:59:07.546452279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f89ccf99-9gx9x,Uid:221586a2-400a-4766-b395-7a5d958c9730,Namespace:calico-system,Attempt:0,}" Jul 6 23:59:07.672792 systemd-networkd[1398]: calibec213a248c: Link UP Jul 6 23:59:07.675334 systemd-networkd[1398]: calibec213a248c: Gained carrier Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.598 [INFO][4240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0 whisker-8f89ccf99- calico-system 221586a2-400a-4766-b395-7a5d958c9730 897 0 2025-07-06 23:59:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8f89ccf99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b whisker-8f89ccf99-9gx9x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibec213a248c [] [] }} ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.598 [INFO][4240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.622 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" 
HandleID="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.622 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" HandleID="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"whisker-8f89ccf99-9gx9x", "timestamp":"2025-07-06 23:59:07.62235692 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.622 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.622 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.622 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b' Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.632 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.641 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.647 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.649 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.651 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.651 [INFO][4253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.653 [INFO][4253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00 Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.658 [INFO][4253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.664 [INFO][4253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.193/26] block=192.168.14.192/26 handle="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.664 [INFO][4253] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.193/26] handle="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.664 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:07.691895 containerd[1511]: 2025-07-06 23:59:07.664 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.193/26] IPv6=[] ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" HandleID="k8s-pod-network.56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.667 [INFO][4240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0", GenerateName:"whisker-8f89ccf99-", Namespace:"calico-system", SelfLink:"", UID:"221586a2-400a-4766-b395-7a5d958c9730", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f89ccf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"whisker-8f89ccf99-9gx9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibec213a248c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.667 [INFO][4240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.193/32] ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.667 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibec213a248c ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.676 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x"
WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.677 [INFO][4240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0", GenerateName:"whisker-8f89ccf99-", Namespace:"calico-system", SelfLink:"", UID:"221586a2-400a-4766-b395-7a5d958c9730", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f89ccf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00", Pod:"whisker-8f89ccf99-9gx9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibec213a248c", MAC:"92:49:1c:52:4c:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:07.693745 containerd[1511]: 2025-07-06 23:59:07.687 [INFO][4240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00" Namespace="calico-system" Pod="whisker-8f89ccf99-9gx9x" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--8f89ccf99--9gx9x-eth0" Jul 6 23:59:07.716720 containerd[1511]: time="2025-07-06T23:59:07.716599918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:07.718616 containerd[1511]: time="2025-07-06T23:59:07.716862672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:07.718738 containerd[1511]: time="2025-07-06T23:59:07.718618934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:07.718785 containerd[1511]: time="2025-07-06T23:59:07.718730043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:07.746661 systemd[1]: Started cri-containerd-56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00.scope - libcontainer container 56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00. 
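Annotation: the sequence above shows Calico IPAM allocating 192.168.14.193 for the whisker pod: acquire the host-wide lock, look up the host's block affinity, load the block 192.168.14.192/26, claim the next free address, and write the block back to claim it. Below is a minimal Go sketch of that loop; the in-memory bitmap block, mutex, and handle strings are simplifications for illustration, not Calico's actual datastore-backed API.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 IPAM block with a per-address allocation map.
// Calico's real blocks live in the datastore; this is an in-memory sketch.
type block struct {
	cidr      net.IPNet
	allocated map[int]string // ordinal within block -> handle ID
}

// hostLock stands in for the "host-wide IPAM lock" seen in the log.
var hostLock sync.Mutex

// autoAssign claims the next free IPv4 in the block for the given handle,
// mirroring the logged steps: lock -> confirm affinity -> claim -> write.
func autoAssign(b *block, handle string) (net.IP, error) {
	hostLock.Lock()
	defer hostLock.Unlock()

	ones, bits := b.cidr.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses in a /26
	base := b.cidr.IP.To4()
	// Start at ordinal 1: the log hands out .193 first, so .192 is
	// assumed already taken (e.g. by the node itself).
	for ord := 1; ord < size; ord++ {
		if _, taken := b.allocated[ord]; taken {
			continue
		}
		b.allocated[ord] = handle
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
	}
	return nil, fmt.Errorf("no free addresses in %s", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.14.192/26")
	b := &block{cidr: *cidr, allocated: map[int]string{}}
	for _, h := range []string{"whisker", "kube-controllers"} {
		ip, _ := autoAssign(b, "k8s-pod-network."+h)
		fmt.Println(h, "->", ip) // .193, then .194, matching the log
	}
}
```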
Jul 6 23:59:07.810054 containerd[1511]: time="2025-07-06T23:59:07.809892118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f89ccf99-9gx9x,Uid:221586a2-400a-4766-b395-7a5d958c9730,Namespace:calico-system,Attempt:0,} returns sandbox id \"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00\"" Jul 6 23:59:07.811964 containerd[1511]: time="2025-07-06T23:59:07.811928277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:59:08.917059 systemd-networkd[1398]: calibec213a248c: Gained IPv6LL Jul 6 23:59:09.276404 containerd[1511]: time="2025-07-06T23:59:09.275825613Z" level=info msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.333 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.333 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" iface="eth0" netns="/var/run/netns/cni-a9257e0f-cf8e-a465-5fa4-72b3a25f0704" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.334 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" iface="eth0" netns="/var/run/netns/cni-a9257e0f-cf8e-a465-5fa4-72b3a25f0704" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.334 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" iface="eth0" netns="/var/run/netns/cni-a9257e0f-cf8e-a465-5fa4-72b3a25f0704" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.334 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.334 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.352 [INFO][4334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.352 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.352 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.362 [WARNING][4334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.362 [INFO][4334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.363 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:09.368533 containerd[1511]: 2025-07-06 23:59:09.365 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:09.368885 containerd[1511]: time="2025-07-06T23:59:09.368688999Z" level=info msg="TearDown network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" successfully" Jul 6 23:59:09.368885 containerd[1511]: time="2025-07-06T23:59:09.368713776Z" level=info msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" returns successfully" Jul 6 23:59:09.371838 containerd[1511]: time="2025-07-06T23:59:09.371768407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-985fd5996-856v9,Uid:3959008a-f171-4a04-98c6-424b260ddb24,Namespace:calico-system,Attempt:1,}" Jul 6 23:59:09.373131 systemd[1]: run-netns-cni\x2da9257e0f\x2dcf8e\x2da465\x2d5fa4\x2d72b3a25f0704.mount: Deactivated successfully. 
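Annotation: the teardown above releases the sandbox's address first by handle ID, logs a WARNING that the address doesn't exist, falls back to releasing by workload ID, and still reports success. Treating "not found" as a no-op makes CNI DEL idempotent, which matters because DEL can be retried. A sketch of that pattern follows; the map-backed store and the shortened IDs are stand-ins for illustration, not Calico's datastore client.

```go
package main

import "fmt"

// store maps an allocation key (handle ID or workload ID) to an IP.
type store map[string]string

// release removes an allocation if present. A missing key is not an
// error: CNI DEL may run more than once, so teardown must be idempotent.
func (s store) release(key string) {
	if _, ok := s[key]; !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist. Ignoring\n", key)
		return
	}
	delete(s, key)
	fmt.Printf("released %q\n", key)
}

// releaseAll mirrors the logged order: try the handle ID first,
// then fall back to the workload ID.
func releaseAll(s store, handleID, workloadID string) {
	s.release(handleID)
	s.release(workloadID)
}

func main() {
	s := store{}
	releaseAll(s, "k8s-pod-network.aab6ddba", "calico-kube-controllers-985fd5996-856v9")
	// Running teardown twice is safe: both calls just log and return.
	releaseAll(s, "k8s-pod-network.aab6ddba", "calico-kube-controllers-985fd5996-856v9")
}
```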
Jul 6 23:59:09.419163 containerd[1511]: time="2025-07-06T23:59:09.419089818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:09.420443 containerd[1511]: time="2025-07-06T23:59:09.420386896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:59:09.421434 containerd[1511]: time="2025-07-06T23:59:09.421342874Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:09.424581 containerd[1511]: time="2025-07-06T23:59:09.424452079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:09.426566 containerd[1511]: time="2025-07-06T23:59:09.426540705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.614565561s" Jul 6 23:59:09.426620 containerd[1511]: time="2025-07-06T23:59:09.426571734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:59:09.432047 containerd[1511]: time="2025-07-06T23:59:09.432026638Z" level=info msg="CreateContainer within sandbox \"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:59:09.448356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873640745.mount: Deactivated successfully. Jul 6 23:59:09.450287 containerd[1511]: time="2025-07-06T23:59:09.450247060Z" level=info msg="CreateContainer within sandbox \"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c287f9cf4543fd7b5cce688559abf4339e5c50391fbdda2ca7915db8e4458b1b\"" Jul 6 23:59:09.451494 containerd[1511]: time="2025-07-06T23:59:09.451472866Z" level=info msg="StartContainer for \"c287f9cf4543fd7b5cce688559abf4339e5c50391fbdda2ca7915db8e4458b1b\"" Jul 6 23:59:09.478518 systemd[1]: Started cri-containerd-c287f9cf4543fd7b5cce688559abf4339e5c50391fbdda2ca7915db8e4458b1b.scope - libcontainer container c287f9cf4543fd7b5cce688559abf4339e5c50391fbdda2ca7915db8e4458b1b. 
Jul 6 23:59:09.504362 systemd-networkd[1398]: cali80538ed037e: Link UP Jul 6 23:59:09.505493 systemd-networkd[1398]: cali80538ed037e: Gained carrier Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.420 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0 calico-kube-controllers-985fd5996- calico-system 3959008a-f171-4a04-98c6-424b260ddb24 907 0 2025-07-06 23:58:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:985fd5996 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b calico-kube-controllers-985fd5996-856v9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80538ed037e [] [] }} ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.420 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.453 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" HandleID="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.454 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" HandleID="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"calico-kube-controllers-985fd5996-856v9", "timestamp":"2025-07-06 23:59:09.45383703 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.454 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.454 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.454 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b' Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.467 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.474 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.478 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.480 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.481 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.482 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.483 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01 Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.487 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.497 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.194/26] block=192.168.14.192/26 handle="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.497 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.194/26] handle="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.497 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:59:09.533827 containerd[1511]: 2025-07-06 23:59:09.497 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.194/26] IPv6=[] ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" HandleID="k8s-pod-network.20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.500 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0", GenerateName:"calico-kube-controllers-985fd5996-", Namespace:"calico-system", SelfLink:"", UID:"3959008a-f171-4a04-98c6-424b260ddb24", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"985fd5996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"calico-kube-controllers-985fd5996-856v9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80538ed037e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.500 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.194/32] ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.500 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80538ed037e ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.508 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" 
Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.508 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0", GenerateName:"calico-kube-controllers-985fd5996-", Namespace:"calico-system", SelfLink:"", UID:"3959008a-f171-4a04-98c6-424b260ddb24", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"985fd5996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01", Pod:"calico-kube-controllers-985fd5996-856v9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80538ed037e", MAC:"7e:39:08:5a:82:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:09.536655 containerd[1511]: 2025-07-06 23:59:09.527 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01" Namespace="calico-system" Pod="calico-kube-controllers-985fd5996-856v9" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:09.536655 containerd[1511]: time="2025-07-06T23:59:09.534633550Z" level=info msg="StartContainer for \"c287f9cf4543fd7b5cce688559abf4339e5c50391fbdda2ca7915db8e4458b1b\" returns successfully" Jul 6 23:59:09.542085 containerd[1511]: time="2025-07-06T23:59:09.542057918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:59:09.558099 containerd[1511]: time="2025-07-06T23:59:09.557887485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:09.558099 containerd[1511]: time="2025-07-06T23:59:09.557935316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:09.558099 containerd[1511]: time="2025-07-06T23:59:09.557944523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:09.558099 containerd[1511]: time="2025-07-06T23:59:09.558010046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:09.575591 systemd[1]: Started cri-containerd-20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01.scope - libcontainer container 20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01. Jul 6 23:59:09.608697 containerd[1511]: time="2025-07-06T23:59:09.608646658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-985fd5996-856v9,Uid:3959008a-f171-4a04-98c6-424b260ddb24,Namespace:calico-system,Attempt:1,} returns sandbox id \"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01\"" Jul 6 23:59:10.270051 containerd[1511]: time="2025-07-06T23:59:10.268310888Z" level=info msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" Jul 6 23:59:10.270051 containerd[1511]: time="2025-07-06T23:59:10.268355913Z" level=info msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.326 [INFO][4465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.326 [INFO][4465] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" iface="eth0" netns="/var/run/netns/cni-89a6fe87-30f6-4348-942a-628fbbf8be46" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.327 [INFO][4465] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" iface="eth0" netns="/var/run/netns/cni-89a6fe87-30f6-4348-942a-628fbbf8be46" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.328 [INFO][4465] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" iface="eth0" netns="/var/run/netns/cni-89a6fe87-30f6-4348-942a-628fbbf8be46" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.328 [INFO][4465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.328 [INFO][4465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.352 [INFO][4477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.353 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.353 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.358 [WARNING][4477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.358 [INFO][4477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.360 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:10.366748 containerd[1511]: 2025-07-06 23:59:10.362 [INFO][4465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:10.366748 containerd[1511]: time="2025-07-06T23:59:10.366515389Z" level=info msg="TearDown network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" successfully" Jul 6 23:59:10.366748 containerd[1511]: time="2025-07-06T23:59:10.366542600Z" level=info msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" returns successfully" Jul 6 23:59:10.368896 containerd[1511]: time="2025-07-06T23:59:10.367642237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-kx7lz,Uid:7f00b71d-ed32-42ca-b9e1-292600c3ce63,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:59:10.375812 systemd[1]: run-netns-cni\x2d89a6fe87\x2d30f6\x2d4348\x2d942a\x2d628fbbf8be46.mount: Deactivated successfully. Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.330 [INFO][4460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.331 [INFO][4460] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" iface="eth0" netns="/var/run/netns/cni-dd088a50-c33e-66e8-6c3b-ec158fdf09f3" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.331 [INFO][4460] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" iface="eth0" netns="/var/run/netns/cni-dd088a50-c33e-66e8-6c3b-ec158fdf09f3" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.331 [INFO][4460] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" iface="eth0" netns="/var/run/netns/cni-dd088a50-c33e-66e8-6c3b-ec158fdf09f3" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.331 [INFO][4460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.331 [INFO][4460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.362 [INFO][4479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.363 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.364 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.371 [WARNING][4479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.371 [INFO][4479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.373 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:10.388990 containerd[1511]: 2025-07-06 23:59:10.382 [INFO][4460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:10.392013 containerd[1511]: time="2025-07-06T23:59:10.389097391Z" level=info msg="TearDown network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" successfully" Jul 6 23:59:10.392013 containerd[1511]: time="2025-07-06T23:59:10.389119492Z" level=info msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" returns successfully" Jul 6 23:59:10.392013 containerd[1511]: time="2025-07-06T23:59:10.390950685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szql8,Uid:f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4,Namespace:kube-system,Attempt:1,}" Jul 6 23:59:10.395122 systemd[1]: run-netns-cni\x2ddd088a50\x2dc33e\x2d66e8\x2d6c3b\x2dec158fdf09f3.mount: Deactivated successfully. 
Jul 6 23:59:10.509042 systemd-networkd[1398]: cali205b4656c7e: Link UP Jul 6 23:59:10.510176 systemd-networkd[1398]: cali205b4656c7e: Gained carrier Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.438 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0 calico-apiserver-575b6688c4- calico-apiserver 7f00b71d-ed32-42ca-b9e1-292600c3ce63 920 0 2025-07-06 23:58:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575b6688c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b calico-apiserver-575b6688c4-kx7lz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali205b4656c7e [] [] }} ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.439 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.467 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" HandleID="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.468 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" HandleID="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"calico-apiserver-575b6688c4-kx7lz", "timestamp":"2025-07-06 23:59:10.467842662 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.468 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.468 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.468 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b' Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.475 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.481 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.486 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.488 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.490 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.490 [INFO][4515] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.491 [INFO][4515] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.495 [INFO][4515] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4515] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.195/26] block=192.168.14.192/26 handle="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.195/26] handle="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:59:10.528099 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.195/26] IPv6=[] ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" HandleID="k8s-pod-network.1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.505 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f00b71d-ed32-42ca-b9e1-292600c3ce63", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"calico-apiserver-575b6688c4-kx7lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali205b4656c7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.505 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.195/32] ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.505 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali205b4656c7e ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.510 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.511 
[INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f00b71d-ed32-42ca-b9e1-292600c3ce63", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc", Pod:"calico-apiserver-575b6688c4-kx7lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali205b4656c7e", MAC:"5a:33:ef:ff:00:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:10.529646 containerd[1511]: 2025-07-06 23:59:10.523 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-kx7lz" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:10.549911 containerd[1511]: time="2025-07-06T23:59:10.549805260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:10.550041 containerd[1511]: time="2025-07-06T23:59:10.549943299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:10.550041 containerd[1511]: time="2025-07-06T23:59:10.550018211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:10.550837 containerd[1511]: time="2025-07-06T23:59:10.550649006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:10.574579 systemd[1]: Started cri-containerd-1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc.scope - libcontainer container 1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc. 
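Annotation: each endpoint above is assigned a MAC such as 92:49:1c:52:4c:6b, 7e:39:08:5a:82:4e, or 5a:33:ef:ff:00:1b. All three have the locally-administered bit set and the multicast bit clear in the first octet, the usual pattern for randomly generated veth addresses (as opposed to vendor-assigned OUIs). A quick Go check of those two bits:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// MACs copied from the WorkloadEndpoint dumps above.
	for _, s := range []string{"92:49:1c:52:4c:6b", "7e:39:08:5a:82:4e", "5a:33:ef:ff:00:1b"} {
		hw, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		local := hw[0]&0x02 != 0   // locally administered, not vendor-assigned
		unicast := hw[0]&0x01 == 0 // multicast bit clear
		fmt.Printf("%s local=%v unicast=%v\n", s, local, unicast)
	}
}
```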
Jul 6 23:59:10.617186 systemd-networkd[1398]: cali199bb77e4b6: Link UP Jul 6 23:59:10.617639 systemd-networkd[1398]: cali199bb77e4b6: Gained carrier Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.449 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0 coredns-674b8bbfcf- kube-system f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4 921 0 2025-07-06 23:58:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b coredns-674b8bbfcf-szql8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali199bb77e4b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.449 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.477 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" HandleID="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.477 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" HandleID="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"coredns-674b8bbfcf-szql8", "timestamp":"2025-07-06 23:59:10.477274512 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.477 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.501 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b' Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.578 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.587 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.592 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.595 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.598 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.599 [INFO][4520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.600 [INFO][4520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3 Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.605 [INFO][4520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.611 [INFO][4520] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.196/26] block=192.168.14.192/26 handle="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.611 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.196/26] handle="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.611 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:59:10.637445 containerd[1511]: 2025-07-06 23:59:10.611 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.196/26] IPv6=[] ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" HandleID="k8s-pod-network.f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0"
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.614 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"coredns-674b8bbfcf-szql8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali199bb77e4b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.614 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.196/32] ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0"
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.614 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali199bb77e4b6 ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0"
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.618 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0"
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.618 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3", Pod:"coredns-674b8bbfcf-szql8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali199bb77e4b6", MAC:"8a:1c:72:46:94:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:10.638649 containerd[1511]: 2025-07-06 23:59:10.631 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3" Namespace="kube-system" Pod="coredns-674b8bbfcf-szql8" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0"
Jul 6 23:59:10.652349 containerd[1511]: time="2025-07-06T23:59:10.652197945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-kx7lz,Uid:7f00b71d-ed32-42ca-b9e1-292600c3ce63,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc\""
Jul 6 23:59:10.662395 containerd[1511]: time="2025-07-06T23:59:10.662052029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:59:10.662395 containerd[1511]: time="2025-07-06T23:59:10.662113305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:59:10.662395 containerd[1511]: time="2025-07-06T23:59:10.662127532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:10.662395 containerd[1511]: time="2025-07-06T23:59:10.662209014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:10.680564 systemd[1]: Started cri-containerd-f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3.scope - libcontainer container f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3.
Jul 6 23:59:10.715255 containerd[1511]: time="2025-07-06T23:59:10.715218205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szql8,Uid:f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3\""
Jul 6 23:59:10.721367 containerd[1511]: time="2025-07-06T23:59:10.721318122Z" level=info msg="CreateContainer within sandbox \"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:59:10.746331 containerd[1511]: time="2025-07-06T23:59:10.746278774Z" level=info msg="CreateContainer within sandbox \"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b644945fd40707009451dde01c5cfa16da33be40d57e7c6e9c567da2ca588ec2\""
Jul 6 23:59:10.747544 containerd[1511]: time="2025-07-06T23:59:10.746915522Z" level=info msg="StartContainer for \"b644945fd40707009451dde01c5cfa16da33be40d57e7c6e9c567da2ca588ec2\""
Jul 6 23:59:10.767566 systemd[1]: Started cri-containerd-b644945fd40707009451dde01c5cfa16da33be40d57e7c6e9c567da2ca588ec2.scope - libcontainer container b644945fd40707009451dde01c5cfa16da33be40d57e7c6e9c567da2ca588ec2.
Jul 6 23:59:10.795159 containerd[1511]: time="2025-07-06T23:59:10.795047333Z" level=info msg="StartContainer for \"b644945fd40707009451dde01c5cfa16da33be40d57e7c6e9c567da2ca588ec2\" returns successfully"
Jul 6 23:59:10.965195 systemd-networkd[1398]: cali80538ed037e: Gained IPv6LL
Jul 6 23:59:11.268555 containerd[1511]: time="2025-07-06T23:59:11.268102463Z" level=info msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\""
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.310 [INFO][4671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.310 [INFO][4671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" iface="eth0" netns="/var/run/netns/cni-7396552f-b97f-52a0-16c7-4fd6c1851236"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.311 [INFO][4671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" iface="eth0" netns="/var/run/netns/cni-7396552f-b97f-52a0-16c7-4fd6c1851236"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.311 [INFO][4671] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" iface="eth0" netns="/var/run/netns/cni-7396552f-b97f-52a0-16c7-4fd6c1851236"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.311 [INFO][4671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.311 [INFO][4671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.341 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.341 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.341 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.349 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.350 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.351 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:11.354887 containerd[1511]: 2025-07-06 23:59:11.353 [INFO][4671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d"
Jul 6 23:59:11.359065 containerd[1511]: time="2025-07-06T23:59:11.355006597Z" level=info msg="TearDown network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" successfully"
Jul 6 23:59:11.359065 containerd[1511]: time="2025-07-06T23:59:11.355031774Z" level=info msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" returns successfully"
Jul 6 23:59:11.359065 containerd[1511]: time="2025-07-06T23:59:11.355880239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-pbb6c,Uid:ca3bb848-e578-4ffb-b363-75e2789c4189,Namespace:calico-apiserver,Attempt:1,}"
Jul 6 23:59:11.375308 systemd[1]: run-netns-cni\x2d7396552f\x2db97f\x2d52a0\x2d16c7\x2d4fd6c1851236.mount: Deactivated successfully.
Jul 6 23:59:11.478618 systemd-networkd[1398]: cali6e22392f6d3: Link UP
Jul 6 23:59:11.480567 systemd-networkd[1398]: cali6e22392f6d3: Gained carrier
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.411 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0 calico-apiserver-575b6688c4- calico-apiserver ca3bb848-e578-4ffb-b363-75e2789c4189 935 0 2025-07-06 23:58:43 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:575b6688c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b calico-apiserver-575b6688c4-pbb6c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e22392f6d3 [] [] <nil>}} ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.411 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.436 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" HandleID="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.436 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" HandleID="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"calico-apiserver-575b6688c4-pbb6c", "timestamp":"2025-07-06 23:59:11.436041247 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.436 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.436 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.436 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b'
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.445 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.451 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.456 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.458 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.460 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.460 [INFO][4697] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.461 [INFO][4697] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.467 [INFO][4697] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.474 [INFO][4697] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.197/26] block=192.168.14.192/26 handle="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.474 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.197/26] handle="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.474 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:11.501156 containerd[1511]: 2025-07-06 23:59:11.474 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.197/26] IPv6=[] ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" HandleID="k8s-pod-network.b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.476 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca3bb848-e578-4ffb-b363-75e2789c4189", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"calico-apiserver-575b6688c4-pbb6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e22392f6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.476 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.197/32] ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.476 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e22392f6d3 ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.479 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.479 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca3bb848-e578-4ffb-b363-75e2789c4189", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8", Pod:"calico-apiserver-575b6688c4-pbb6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e22392f6d3", MAC:"ee:24:4a:35:4b:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:11.503380 containerd[1511]: 2025-07-06 23:59:11.498 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8" Namespace="calico-apiserver" Pod="calico-apiserver-575b6688c4-pbb6c" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0"
Jul 6 23:59:11.525334 containerd[1511]: time="2025-07-06T23:59:11.524808034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:59:11.525334 containerd[1511]: time="2025-07-06T23:59:11.524853939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:59:11.525334 containerd[1511]: time="2025-07-06T23:59:11.524865982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:11.525334 containerd[1511]: time="2025-07-06T23:59:11.524959648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:11.549578 systemd[1]: Started cri-containerd-b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8.scope - libcontainer container b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8.
Jul 6 23:59:11.596082 containerd[1511]: time="2025-07-06T23:59:11.596039866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-575b6688c4-pbb6c,Uid:ca3bb848-e578-4ffb-b363-75e2789c4189,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8\""
Jul 6 23:59:11.639496 kubelet[2716]: I0706 23:59:11.639441 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-szql8" podStartSLOduration=37.639426355 podStartE2EDuration="37.639426355s" podCreationTimestamp="2025-07-06 23:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:11.636053395 +0000 UTC m=+44.469233489" watchObservedRunningTime="2025-07-06 23:59:11.639426355 +0000 UTC m=+44.472606430"
Jul 6 23:59:11.797161 systemd-networkd[1398]: cali199bb77e4b6: Gained IPv6LL
Jul 6 23:59:12.244964 systemd-networkd[1398]: cali205b4656c7e: Gained IPv6LL
Jul 6 23:59:12.268711 containerd[1511]: time="2025-07-06T23:59:12.267619777Z" level=info msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\""
Jul 6 23:59:12.268711 containerd[1511]: time="2025-07-06T23:59:12.267719864Z" level=info msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\""
Jul 6 23:59:12.268711 containerd[1511]: time="2025-07-06T23:59:12.268094990Z" level=info msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\""
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.386 [INFO][4791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.386 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" iface="eth0" netns="/var/run/netns/cni-68cf6bfc-151f-e111-34c9-02e933957a31"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.387 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" iface="eth0" netns="/var/run/netns/cni-68cf6bfc-151f-e111-34c9-02e933957a31"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.389 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" iface="eth0" netns="/var/run/netns/cni-68cf6bfc-151f-e111-34c9-02e933957a31"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.389 [INFO][4791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.389 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.436 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.436 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.437 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.446 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.446 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.448 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:12.453640 containerd[1511]: 2025-07-06 23:59:12.451 [INFO][4791] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78"
Jul 6 23:59:12.457070 containerd[1511]: time="2025-07-06T23:59:12.454506592Z" level=info msg="TearDown network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" successfully"
Jul 6 23:59:12.457070 containerd[1511]: time="2025-07-06T23:59:12.454555954Z" level=info msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" returns successfully"
Jul 6 23:59:12.457070 containerd[1511]: time="2025-07-06T23:59:12.455621508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2k7qf,Uid:0ca62d42-186f-4f68-9ed0-588257419b27,Namespace:calico-system,Attempt:1,}"
Jul 6 23:59:12.460223 systemd[1]: run-netns-cni\x2d68cf6bfc\x2d151f\x2de111\x2d34c9\x2d02e933957a31.mount: Deactivated successfully.
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.369 [INFO][4795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.369 [INFO][4795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" iface="eth0" netns="/var/run/netns/cni-9772f5fc-546a-26cb-ae75-fe065625a204"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.370 [INFO][4795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" iface="eth0" netns="/var/run/netns/cni-9772f5fc-546a-26cb-ae75-fe065625a204"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.371 [INFO][4795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" iface="eth0" netns="/var/run/netns/cni-9772f5fc-546a-26cb-ae75-fe065625a204"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.371 [INFO][4795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.371 [INFO][4795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.448 [INFO][4814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.449 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.449 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.457 [WARNING][4814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.457 [INFO][4814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0"
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.462 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:12.467262 containerd[1511]: 2025-07-06 23:59:12.464 [INFO][4795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79"
Jul 6 23:59:12.469911 containerd[1511]: time="2025-07-06T23:59:12.468680597Z" level=info msg="TearDown network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" successfully"
Jul 6 23:59:12.469911 containerd[1511]: time="2025-07-06T23:59:12.468704572Z" level=info msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" returns successfully"
Jul 6 23:59:12.470622 containerd[1511]: time="2025-07-06T23:59:12.470597650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-966nl,Uid:51e7e008-60ac-4100-81fe-67be6774ad5f,Namespace:calico-system,Attempt:1,}"
Jul 6 23:59:12.472038 systemd[1]: run-netns-cni\x2d9772f5fc\x2d546a\x2d26cb\x2dae75\x2dfe065625a204.mount: Deactivated successfully.
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.387 [INFO][4796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.387 [INFO][4796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" iface="eth0" netns="/var/run/netns/cni-46e8050c-aa50-2561-a075-3325bff5f9bd"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.388 [INFO][4796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" iface="eth0" netns="/var/run/netns/cni-46e8050c-aa50-2561-a075-3325bff5f9bd"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.388 [INFO][4796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" iface="eth0" netns="/var/run/netns/cni-46e8050c-aa50-2561-a075-3325bff5f9bd"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.388 [INFO][4796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.388 [INFO][4796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.448 [INFO][4819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.449 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.462 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.476 [WARNING][4819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.478 [INFO][4819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0"
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.480 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:12.491821 containerd[1511]: 2025-07-06 23:59:12.487 [INFO][4796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb"
Jul 6 23:59:12.494539 containerd[1511]: time="2025-07-06T23:59:12.492481516Z" level=info msg="TearDown network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" successfully"
Jul 6 23:59:12.495523 systemd[1]: run-netns-cni\x2d46e8050c\x2daa50\x2d2561\x2da075\x2d3325bff5f9bd.mount: Deactivated successfully.
Jul 6 23:59:12.497718 containerd[1511]: time="2025-07-06T23:59:12.492513988Z" level=info msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" returns successfully"
Jul 6 23:59:12.500468 containerd[1511]: time="2025-07-06T23:59:12.499902617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x79tg,Uid:db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7,Namespace:kube-system,Attempt:1,}"
Jul 6 23:59:12.501478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804130171.mount: Deactivated successfully.
Jul 6 23:59:12.531002 containerd[1511]: time="2025-07-06T23:59:12.530956792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:59:12.539195 containerd[1511]: time="2025-07-06T23:59:12.538583408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 6 23:59:12.539789 containerd[1511]: time="2025-07-06T23:59:12.539735133Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:59:12.545298 containerd[1511]: time="2025-07-06T23:59:12.545263375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:59:12.546627 containerd[1511]: time="2025-07-06T23:59:12.546500991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.004411063s"
Jul 6 23:59:12.546627 containerd[1511]: time="2025-07-06T23:59:12.546538642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 6 23:59:12.552090 containerd[1511]: time="2025-07-06T23:59:12.552044422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 6 23:59:12.554343 containerd[1511]: time="2025-07-06T23:59:12.554285565Z" level=info msg="CreateContainer within sandbox \"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 6 23:59:12.577856 containerd[1511]: time="2025-07-06T23:59:12.577772505Z" level=info msg="CreateContainer within sandbox \"56ce3c76f832751cdb145f35ddcdb35514ed0d9acd0364a9df46025214207f00\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e2693c4bef4e0305b8ae13fe4ab7df1118fbaba6af6b817c6cc5244e1f7ecb08\""
Jul 6 23:59:12.579122 containerd[1511]: time="2025-07-06T23:59:12.579073881Z" level=info msg="StartContainer for \"e2693c4bef4e0305b8ae13fe4ab7df1118fbaba6af6b817c6cc5244e1f7ecb08\""
Jul 6 23:59:12.678547 systemd-networkd[1398]: cali0253c7332d0: Link UP
Jul 6 23:59:12.680126 systemd-networkd[1398]: cali0253c7332d0: Gained carrier
Jul 6 23:59:12.698663 systemd[1]: Started cri-containerd-e2693c4bef4e0305b8ae13fe4ab7df1118fbaba6af6b817c6cc5244e1f7ecb08.scope - libcontainer container e2693c4bef4e0305b8ae13fe4ab7df1118fbaba6af6b817c6cc5244e1f7ecb08.
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.532 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0 csi-node-driver- calico-system 0ca62d42-186f-4f68-9ed0-588257419b27 953 0 2025-07-06 23:58:46 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b csi-node-driver-2k7qf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0253c7332d0 [] [] <nil>}} ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.532 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.592 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" HandleID="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.593 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" HandleID="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"csi-node-driver-2k7qf", "timestamp":"2025-07-06 23:59:12.592868683 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.593 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.593 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.593 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b'
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.604 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.614 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.626 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.629 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.635 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.636 [INFO][4874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.637 [INFO][4874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.644 [INFO][4874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.658 [INFO][4874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.198/26] block=192.168.14.192/26 handle="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.658 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.198/26] handle="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.658 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:12.710924 containerd[1511]: 2025-07-06 23:59:12.658 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.198/26] IPv6=[] ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" HandleID="k8s-pod-network.aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.664 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ca62d42-186f-4f68-9ed0-588257419b27", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"csi-node-driver-2k7qf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0253c7332d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.665 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.198/32] ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.665 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0253c7332d0 ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.680 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.681 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ca62d42-186f-4f68-9ed0-588257419b27", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab", Pod:"csi-node-driver-2k7qf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0253c7332d0", MAC:"da:4f:56:1b:c2:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:59:12.712464 containerd[1511]: 2025-07-06 23:59:12.702 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab" Namespace="calico-system" Pod="csi-node-driver-2k7qf" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0"
Jul 6 23:59:12.778679 systemd-networkd[1398]: cali9cb246ecca0: Link UP
Jul 6 23:59:12.781533 systemd-networkd[1398]: cali9cb246ecca0: Gained carrier
Jul 6 23:59:12.791533 containerd[1511]: time="2025-07-06T23:59:12.791341508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:59:12.792425 containerd[1511]: time="2025-07-06T23:59:12.792169994Z" level=info msg="StartContainer for \"e2693c4bef4e0305b8ae13fe4ab7df1118fbaba6af6b817c6cc5244e1f7ecb08\" returns successfully"
Jul 6 23:59:12.793831 containerd[1511]: time="2025-07-06T23:59:12.793549427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:59:12.793831 containerd[1511]: time="2025-07-06T23:59:12.793612746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:12.794704 containerd[1511]: time="2025-07-06T23:59:12.794613348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.575 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0 goldmane-768f4c5c69- calico-system 51e7e008-60ac-4100-81fe-67be6774ad5f 952 0 2025-07-06 23:58:45 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b goldmane-768f4c5c69-966nl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9cb246ecca0 [] [] <nil>}} ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.575 [INFO][4845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.643 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" HandleID="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.643 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" HandleID="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"goldmane-768f4c5c69-966nl", "timestamp":"2025-07-06 23:59:12.64374068 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.643 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.659 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.659 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b'
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.709 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.719 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.727 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.729 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.732 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.732 [INFO][4883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.735 [INFO][4883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.741 [INFO][4883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.199/26] block=192.168.14.192/26 handle="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.199/26] handle="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" host="ci-4081-3-4-2-e8b158d58b"
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:12.802059 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.199/26] IPv6=[] ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" HandleID="k8s-pod-network.25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.763 [INFO][4845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"51e7e008-60ac-4100-81fe-67be6774ad5f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"goldmane-768f4c5c69-966nl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb246ecca0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.763 [INFO][4845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.199/32] ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.763 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cb246ecca0 ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.782 [INFO][4845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.783 [INFO][4845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"51e7e008-60ac-4100-81fe-67be6774ad5f", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e", Pod:"goldmane-768f4c5c69-966nl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb246ecca0", MAC:"e2:10:02:c8:a9:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:12.802974 containerd[1511]: 2025-07-06 23:59:12.799 [INFO][4845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e" Namespace="calico-system" Pod="goldmane-768f4c5c69-966nl" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:12.825573 systemd[1]: Started cri-containerd-aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab.scope - libcontainer container aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab. Jul 6 23:59:12.836330 containerd[1511]: time="2025-07-06T23:59:12.836113940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:12.836330 containerd[1511]: time="2025-07-06T23:59:12.836163993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:12.836330 containerd[1511]: time="2025-07-06T23:59:12.836177038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:12.836330 containerd[1511]: time="2025-07-06T23:59:12.836256027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:12.860752 systemd[1]: Started cri-containerd-25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e.scope - libcontainer container 25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e. 
Jul 6 23:59:12.892009 systemd-networkd[1398]: calib4e200a1f5c: Link UP Jul 6 23:59:12.894814 systemd-networkd[1398]: calib4e200a1f5c: Gained carrier Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.586 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0 coredns-674b8bbfcf- kube-system db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7 954 0 2025-07-06 23:58:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-2-e8b158d58b coredns-674b8bbfcf-x79tg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4e200a1f5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.586 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.648 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" HandleID="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.648 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" HandleID="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320da0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-2-e8b158d58b", "pod":"coredns-674b8bbfcf-x79tg", "timestamp":"2025-07-06 23:59:12.648636693 +0000 UTC"}, Hostname:"ci-4081-3-4-2-e8b158d58b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.648 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.758 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-2-e8b158d58b' Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.806 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.820 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.834 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.837 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.843 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.844 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.849 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8 Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.863 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.872 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.14.200/26] block=192.168.14.192/26 handle="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.872 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.200/26] handle="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" host="ci-4081-3-4-2-e8b158d58b" Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.872 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:59:12.915254 containerd[1511]: 2025-07-06 23:59:12.872 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.200/26] IPv6=[] ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" HandleID="k8s-pod-network.7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.878 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"", Pod:"coredns-674b8bbfcf-x79tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e200a1f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.878 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.200/32] ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.880 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e200a1f5c ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.895 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.899 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8", Pod:"coredns-674b8bbfcf-x79tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e200a1f5c", MAC:"3a:9d:ae:d0:f7:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:12.915773 containerd[1511]: 2025-07-06 23:59:12.910 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-x79tg" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:12.931602 containerd[1511]: time="2025-07-06T23:59:12.931560789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2k7qf,Uid:0ca62d42-186f-4f68-9ed0-588257419b27,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab\"" Jul 6 23:59:12.943324 containerd[1511]: time="2025-07-06T23:59:12.943170402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:12.943324 containerd[1511]: time="2025-07-06T23:59:12.943268617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:12.943959 containerd[1511]: time="2025-07-06T23:59:12.943566678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:12.943959 containerd[1511]: time="2025-07-06T23:59:12.943898811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:12.962328 systemd[1]: Started cri-containerd-7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8.scope - libcontainer container 7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8. Jul 6 23:59:12.974275 containerd[1511]: time="2025-07-06T23:59:12.974153995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-966nl,Uid:51e7e008-60ac-4100-81fe-67be6774ad5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e\"" Jul 6 23:59:13.008338 containerd[1511]: time="2025-07-06T23:59:13.008293569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x79tg,Uid:db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7,Namespace:kube-system,Attempt:1,} returns sandbox id \"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8\"" Jul 6 23:59:13.019263 containerd[1511]: time="2025-07-06T23:59:13.019212003Z" level=info msg="CreateContainer within sandbox \"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:59:13.031548 containerd[1511]: time="2025-07-06T23:59:13.030300767Z" level=info msg="CreateContainer within sandbox \"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c1608d91f6d6e8bd4676afce0e8920187fad2b194a77f785d8ac03dcc1fb1ff\"" Jul 6 23:59:13.033195 containerd[1511]: time="2025-07-06T23:59:13.032918638Z" level=info msg="StartContainer for \"9c1608d91f6d6e8bd4676afce0e8920187fad2b194a77f785d8ac03dcc1fb1ff\"" Jul 6 23:59:13.056550 systemd[1]: Started cri-containerd-9c1608d91f6d6e8bd4676afce0e8920187fad2b194a77f785d8ac03dcc1fb1ff.scope - libcontainer container 9c1608d91f6d6e8bd4676afce0e8920187fad2b194a77f785d8ac03dcc1fb1ff. 
Jul 6 23:59:13.078035 containerd[1511]: time="2025-07-06T23:59:13.077990410Z" level=info msg="StartContainer for \"9c1608d91f6d6e8bd4676afce0e8920187fad2b194a77f785d8ac03dcc1fb1ff\" returns successfully" Jul 6 23:59:13.140579 systemd-networkd[1398]: cali6e22392f6d3: Gained IPv6LL Jul 6 23:59:13.656303 kubelet[2716]: I0706 23:59:13.656228 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8f89ccf99-9gx9x" podStartSLOduration=3.9184953 podStartE2EDuration="8.656211454s" podCreationTimestamp="2025-07-06 23:59:05 +0000 UTC" firstStartedPulling="2025-07-06 23:59:07.811515701 +0000 UTC m=+40.644695786" lastFinishedPulling="2025-07-06 23:59:12.549231825 +0000 UTC m=+45.382411940" observedRunningTime="2025-07-06 23:59:13.640943134 +0000 UTC m=+46.474123228" watchObservedRunningTime="2025-07-06 23:59:13.656211454 +0000 UTC m=+46.489391529" Jul 6 23:59:14.101604 systemd-networkd[1398]: cali9cb246ecca0: Gained IPv6LL Jul 6 23:59:14.294127 systemd-networkd[1398]: cali0253c7332d0: Gained IPv6LL Jul 6 23:59:14.356608 systemd-networkd[1398]: calib4e200a1f5c: Gained IPv6LL Jul 6 23:59:14.655738 kubelet[2716]: I0706 23:59:14.655470 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x79tg" podStartSLOduration=40.655445215 podStartE2EDuration="40.655445215s" podCreationTimestamp="2025-07-06 23:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:59:13.657051103 +0000 UTC m=+46.490231178" watchObservedRunningTime="2025-07-06 23:59:14.655445215 +0000 UTC m=+47.488625350" Jul 6 23:59:15.861801 containerd[1511]: time="2025-07-06T23:59:15.861754015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:15.862750 containerd[1511]: time="2025-07-06T23:59:15.862704232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:59:15.863991 containerd[1511]: time="2025-07-06T23:59:15.863952908Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:15.865682 containerd[1511]: time="2025-07-06T23:59:15.865623168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:15.866208 containerd[1511]: time="2025-07-06T23:59:15.866052194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.313833854s" Jul 6 23:59:15.866208 containerd[1511]: time="2025-07-06T23:59:15.866079214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:59:15.867147 containerd[1511]: time="2025-07-06T23:59:15.867124370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 
23:59:15.883084 containerd[1511]: time="2025-07-06T23:59:15.883034375Z" level=info msg="CreateContainer within sandbox \"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:59:15.954591 containerd[1511]: time="2025-07-06T23:59:15.954549478Z" level=info msg="CreateContainer within sandbox \"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49\"" Jul 6 23:59:15.956296 containerd[1511]: time="2025-07-06T23:59:15.956271074Z" level=info msg="StartContainer for \"a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49\"" Jul 6 23:59:15.991529 systemd[1]: Started cri-containerd-a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49.scope - libcontainer container a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49. Jul 6 23:59:16.028080 containerd[1511]: time="2025-07-06T23:59:16.027987926Z" level=info msg="StartContainer for \"a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49\" returns successfully" Jul 6 23:59:16.683050 kubelet[2716]: I0706 23:59:16.682987 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-985fd5996-856v9" podStartSLOduration=24.422872489 podStartE2EDuration="30.679984331s" podCreationTimestamp="2025-07-06 23:58:46 +0000 UTC" firstStartedPulling="2025-07-06 23:59:09.609859839 +0000 UTC m=+42.443039915" lastFinishedPulling="2025-07-06 23:59:15.866971682 +0000 UTC m=+48.700151757" observedRunningTime="2025-07-06 23:59:16.678168007 +0000 UTC m=+49.511348092" watchObservedRunningTime="2025-07-06 23:59:16.679984331 +0000 UTC m=+49.513164407" Jul 6 23:59:18.077756 containerd[1511]: time="2025-07-06T23:59:18.077707004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:18.078722 containerd[1511]: time="2025-07-06T23:59:18.078683199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:59:18.079716 containerd[1511]: time="2025-07-06T23:59:18.079667850Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:18.081531 containerd[1511]: time="2025-07-06T23:59:18.081467121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:18.082465 containerd[1511]: time="2025-07-06T23:59:18.081932096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.214780286s" Jul 6 23:59:18.082465 containerd[1511]: time="2025-07-06T23:59:18.081957654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:59:18.083025 containerd[1511]: 
time="2025-07-06T23:59:18.082865179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:59:18.086304 containerd[1511]: time="2025-07-06T23:59:18.086202803Z" level=info msg="CreateContainer within sandbox \"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:59:18.128844 containerd[1511]: time="2025-07-06T23:59:18.128793423Z" level=info msg="CreateContainer within sandbox \"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85d3ee378eef0aa47c5fd8ed98eaf5b8002c641ab9b7367ae261cc1f63cb17e8\"" Jul 6 23:59:18.129308 containerd[1511]: time="2025-07-06T23:59:18.129228962Z" level=info msg="StartContainer for \"85d3ee378eef0aa47c5fd8ed98eaf5b8002c641ab9b7367ae261cc1f63cb17e8\"" Jul 6 23:59:18.178892 systemd[1]: Started cri-containerd-85d3ee378eef0aa47c5fd8ed98eaf5b8002c641ab9b7367ae261cc1f63cb17e8.scope - libcontainer container 85d3ee378eef0aa47c5fd8ed98eaf5b8002c641ab9b7367ae261cc1f63cb17e8. Jul 6 23:59:18.225106 containerd[1511]: time="2025-07-06T23:59:18.225037829Z" level=info msg="StartContainer for \"85d3ee378eef0aa47c5fd8ed98eaf5b8002c641ab9b7367ae261cc1f63cb17e8\" returns successfully" Jul 6 23:59:18.589868 containerd[1511]: time="2025-07-06T23:59:18.589823200Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:18.591299 containerd[1511]: time="2025-07-06T23:59:18.590839090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:59:18.592323 containerd[1511]: time="2025-07-06T23:59:18.592296660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 509.404549ms" Jul 6 23:59:18.592399 containerd[1511]: time="2025-07-06T23:59:18.592325083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:59:18.594167 containerd[1511]: time="2025-07-06T23:59:18.593873122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:59:18.596368 containerd[1511]: time="2025-07-06T23:59:18.596343266Z" level=info msg="CreateContainer within sandbox \"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:59:18.608572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623177652.mount: Deactivated successfully. 
Jul 6 23:59:18.613794 containerd[1511]: time="2025-07-06T23:59:18.613763988Z" level=info msg="CreateContainer within sandbox \"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9a3ed1a8708fffe5968bc99c02f5c9665dcc483bb71cea438e0c87bddfd8efdc\"" Jul 6 23:59:18.614833 containerd[1511]: time="2025-07-06T23:59:18.614668118Z" level=info msg="StartContainer for \"9a3ed1a8708fffe5968bc99c02f5c9665dcc483bb71cea438e0c87bddfd8efdc\"" Jul 6 23:59:18.640653 systemd[1]: Started cri-containerd-9a3ed1a8708fffe5968bc99c02f5c9665dcc483bb71cea438e0c87bddfd8efdc.scope - libcontainer container 9a3ed1a8708fffe5968bc99c02f5c9665dcc483bb71cea438e0c87bddfd8efdc. Jul 6 23:59:18.729759 containerd[1511]: time="2025-07-06T23:59:18.729722146Z" level=info msg="StartContainer for \"9a3ed1a8708fffe5968bc99c02f5c9665dcc483bb71cea438e0c87bddfd8efdc\" returns successfully" Jul 6 23:59:19.694568 kubelet[2716]: I0706 23:59:19.694037 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:19.706937 kubelet[2716]: I0706 23:59:19.706843 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575b6688c4-pbb6c" podStartSLOduration=29.711067409 podStartE2EDuration="36.706826015s" podCreationTimestamp="2025-07-06 23:58:43 +0000 UTC" firstStartedPulling="2025-07-06 23:59:11.59748347 +0000 UTC m=+44.430663555" lastFinishedPulling="2025-07-06 23:59:18.593242086 +0000 UTC m=+51.426422161" observedRunningTime="2025-07-06 23:59:19.705335272 +0000 UTC m=+52.538515378" watchObservedRunningTime="2025-07-06 23:59:19.706826015 +0000 UTC m=+52.540006090" Jul 6 23:59:19.709258 kubelet[2716]: I0706 23:59:19.709183 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-575b6688c4-kx7lz" podStartSLOduration=29.280194446 podStartE2EDuration="36.709169398s" podCreationTimestamp="2025-07-06 23:58:43 +0000 UTC" firstStartedPulling="2025-07-06 23:59:10.653606714 +0000 UTC m=+43.486786789" lastFinishedPulling="2025-07-06 23:59:18.082581666 +0000 UTC m=+50.915761741" observedRunningTime="2025-07-06 23:59:18.695560248 +0000 UTC m=+51.528740333" watchObservedRunningTime="2025-07-06 23:59:19.709169398 +0000 UTC m=+52.542349494" Jul 6 23:59:20.175306 containerd[1511]: time="2025-07-06T23:59:20.175139129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:20.176139 containerd[1511]: time="2025-07-06T23:59:20.175987743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:59:20.177175 containerd[1511]: time="2025-07-06T23:59:20.176987433Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:20.178834 containerd[1511]: time="2025-07-06T23:59:20.178805780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:20.179239 containerd[1511]: time="2025-07-06T23:59:20.179207976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.585311109s" Jul 6 23:59:20.179239 containerd[1511]: time="2025-07-06T23:59:20.179234926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:59:20.180447 containerd[1511]: time="2025-07-06T23:59:20.180378555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:59:20.225834 containerd[1511]: time="2025-07-06T23:59:20.225777813Z" level=info msg="CreateContainer within sandbox \"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:59:20.265308 containerd[1511]: time="2025-07-06T23:59:20.265244379Z" level=info msg="CreateContainer within sandbox \"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"30ee9076a4909140b1f20f10b37d96ab728200789dd6875f875eeda0f4b486b9\"" Jul 6 23:59:20.265934 containerd[1511]: time="2025-07-06T23:59:20.265907375Z" level=info msg="StartContainer for \"30ee9076a4909140b1f20f10b37d96ab728200789dd6875f875eeda0f4b486b9\"" Jul 6 23:59:20.302151 systemd[1]: Started cri-containerd-30ee9076a4909140b1f20f10b37d96ab728200789dd6875f875eeda0f4b486b9.scope - libcontainer container 30ee9076a4909140b1f20f10b37d96ab728200789dd6875f875eeda0f4b486b9. Jul 6 23:59:20.334827 containerd[1511]: time="2025-07-06T23:59:20.334713946Z" level=info msg="StartContainer for \"30ee9076a4909140b1f20f10b37d96ab728200789dd6875f875eeda0f4b486b9\" returns successfully" Jul 6 23:59:23.664732 systemd[1]: Started sshd@7-157.180.92.196:22-194.0.234.93:17828.service - OpenSSH per-connection server daemon (194.0.234.93:17828). Jul 6 23:59:24.433350 sshd[5360]: Invalid user admin from 194.0.234.93 port 17828 Jul 6 23:59:24.490621 sshd[5360]: Connection closed by invalid user admin 194.0.234.93 port 17828 [preauth] Jul 6 23:59:24.492959 systemd[1]: sshd@7-157.180.92.196:22-194.0.234.93:17828.service: Deactivated successfully. Jul 6 23:59:24.837136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534414900.mount: Deactivated successfully. 
Jul 6 23:59:25.360468 containerd[1511]: time="2025-07-06T23:59:25.360432231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:25.395829 containerd[1511]: time="2025-07-06T23:59:25.361541705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:59:25.427729 containerd[1511]: time="2025-07-06T23:59:25.427673409Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:25.428545 containerd[1511]: time="2025-07-06T23:59:25.428402468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.247917902s" Jul 6 23:59:25.429271 containerd[1511]: time="2025-07-06T23:59:25.428838858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:25.432529 containerd[1511]: time="2025-07-06T23:59:25.432498737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:59:25.464891 containerd[1511]: time="2025-07-06T23:59:25.464719911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:59:25.566239 containerd[1511]: time="2025-07-06T23:59:25.566195497Z" level=info msg="CreateContainer within sandbox \"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:59:25.630187 containerd[1511]: time="2025-07-06T23:59:25.630019322Z" level=info msg="CreateContainer within sandbox \"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4\"" Jul 6 23:59:25.631365 containerd[1511]: time="2025-07-06T23:59:25.631346837Z" level=info msg="StartContainer for \"53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4\"" Jul 6 23:59:25.724608 systemd[1]: Started cri-containerd-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4.scope - libcontainer container 53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4. 
Jul 6 23:59:25.795628 containerd[1511]: time="2025-07-06T23:59:25.795269892Z" level=info msg="StartContainer for \"53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4\" returns successfully" Jul 6 23:59:26.027742 kubelet[2716]: I0706 23:59:26.002392 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-966nl" podStartSLOduration=28.514527832 podStartE2EDuration="40.99068515s" podCreationTimestamp="2025-07-06 23:58:45 +0000 UTC" firstStartedPulling="2025-07-06 23:59:12.975634698 +0000 UTC m=+45.808814773" lastFinishedPulling="2025-07-06 23:59:25.451792016 +0000 UTC m=+58.284972091" observedRunningTime="2025-07-06 23:59:25.929347477 +0000 UTC m=+58.762527573" watchObservedRunningTime="2025-07-06 23:59:25.99068515 +0000 UTC m=+58.823865225" Jul 6 23:59:27.327096 containerd[1511]: time="2025-07-06T23:59:27.326716442Z" level=info msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\"" Jul 6 23:59:27.677331 containerd[1511]: time="2025-07-06T23:59:27.677268655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.678213 containerd[1511]: time="2025-07-06T23:59:27.678151163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:59:27.679444 containerd[1511]: time="2025-07-06T23:59:27.679357810Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.681312 containerd[1511]: time="2025-07-06T23:59:27.681293157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:27.681952 containerd[1511]: time="2025-07-06T23:59:27.681833453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.217078616s" Jul 6 23:59:27.681952 containerd[1511]: time="2025-07-06T23:59:27.681862708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:59:27.731372 containerd[1511]: time="2025-07-06T23:59:27.731331773Z" level=info msg="CreateContainer within sandbox \"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:59:27.766120 containerd[1511]: time="2025-07-06T23:59:27.766066629Z" level=info msg="CreateContainer within sandbox \"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5a63c18f7fd724263ab9ff3d505f4c420840a98661c55b39ac9cd0fe0eb385e9\"" Jul 6 23:59:27.770663 containerd[1511]: time="2025-07-06T23:59:27.770623012Z" level=info msg="StartContainer for \"5a63c18f7fd724263ab9ff3d505f4c420840a98661c55b39ac9cd0fe0eb385e9\"" 
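The figures in these pod_startup_latency_tracker lines are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. For the goldmane pod above: 40.99068515s − (23:59:25.451792016 − 23:59:12.975634698) = 28.514527832s, matching the log. A check in Go, with the timestamps copied from the entry; the helper name is illustrative, not kubelet's internal API.

// Verify kubelet's pod-startup figures from the goldmane entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

// sloDuration reproduces podStartSLOduration = E2E - image-pull window.
func sloDuration(created, firstPull, lastPull, running time.Time) time.Duration {
	return running.Sub(created) - lastPull.Sub(firstPull)
}

func main() {
	created := mustParse("2025-07-06 23:58:45 +0000 UTC")
	firstPull := mustParse("2025-07-06 23:59:12.975634698 +0000 UTC")
	lastPull := mustParse("2025-07-06 23:59:25.451792016 +0000 UTC")
	running := mustParse("2025-07-06 23:59:25.99068515 +0000 UTC")

	fmt.Println(running.Sub(created)) // 40.99068515s  (podStartE2EDuration)
	fmt.Println(sloDuration(created, firstPull, lastPull, running))
	// 28.514527832s (podStartSLOduration)
}

The same identity holds for the whisker entry earlier (8.656211454s − 4.737716124s = 3.9184953s), and for the coredns pod the pull timestamps are the zero time, so SLO and E2E coincide at 40.655445215s.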
Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.589 [WARNING][5475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca3bb848-e578-4ffb-b363-75e2789c4189", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8", Pod:"calico-apiserver-575b6688c4-pbb6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e22392f6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.593 [INFO][5475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.593 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" iface="eth0" netns="" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.593 [INFO][5475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.593 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.763 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.766 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.767 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.784 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.784 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.791 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:27.804050 containerd[1511]: 2025-07-06 23:59:27.799 [INFO][5475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.804050 containerd[1511]: time="2025-07-06T23:59:27.803910058Z" level=info msg="TearDown network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" successfully" Jul 6 23:59:27.804050 containerd[1511]: time="2025-07-06T23:59:27.803944933Z" level=info msg="StopPodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" returns successfully" Jul 6 23:59:27.847669 systemd[1]: Started cri-containerd-5a63c18f7fd724263ab9ff3d505f4c420840a98661c55b39ac9cd0fe0eb385e9.scope - libcontainer container 5a63c18f7fd724263ab9ff3d505f4c420840a98661c55b39ac9cd0fe0eb385e9. Jul 6 23:59:27.904660 containerd[1511]: time="2025-07-06T23:59:27.904631300Z" level=info msg="StartContainer for \"5a63c18f7fd724263ab9ff3d505f4c420840a98661c55b39ac9cd0fe0eb385e9\" returns successfully" Jul 6 23:59:27.925590 containerd[1511]: time="2025-07-06T23:59:27.925512582Z" level=info msg="RemovePodSandbox for \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\"" Jul 6 23:59:27.930595 containerd[1511]: time="2025-07-06T23:59:27.929582921Z" level=info msg="Forcibly stopping sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\"" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.965 [WARNING][5541] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca3bb848-e578-4ffb-b363-75e2789c4189", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"b90aeec07da25c9dd6022d60687022a3a62f5f696b02582f3ed87b91f82429a8", Pod:"calico-apiserver-575b6688c4-pbb6c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e22392f6d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.965 [INFO][5541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.965 [INFO][5541] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" iface="eth0" netns="" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.965 [INFO][5541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.965 [INFO][5541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.983 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.983 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.983 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.989 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.989 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" HandleID="k8s-pod-network.0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--pbb6c-eth0" Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.992 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:27.998481 containerd[1511]: 2025-07-06 23:59:27.995 [INFO][5541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d" Jul 6 23:59:27.998481 containerd[1511]: time="2025-07-06T23:59:27.998472473Z" level=info msg="TearDown network for sandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" successfully" Jul 6 23:59:28.008566 containerd[1511]: time="2025-07-06T23:59:28.008512941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.031301 containerd[1511]: time="2025-07-06T23:59:28.031127029Z" level=info msg="RemovePodSandbox \"0bcb667e20a281eb338164bc228e08a923341ed8e29101ddf1c65528754fcc8d\" returns successfully" Jul 6 23:59:28.041111 containerd[1511]: time="2025-07-06T23:59:28.040859218Z" level=info msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.076 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3", Pod:"coredns-674b8bbfcf-szql8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali199bb77e4b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.076 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.076 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" iface="eth0" netns="" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.076 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.076 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.097 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.097 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.097 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.103 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.103 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.105 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.109246 containerd[1511]: 2025-07-06 23:59:28.107 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.111315 containerd[1511]: time="2025-07-06T23:59:28.109259659Z" level=info msg="TearDown network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" successfully" Jul 6 23:59:28.111315 containerd[1511]: time="2025-07-06T23:59:28.109282652Z" level=info msg="StopPodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" returns successfully" Jul 6 23:59:28.111315 containerd[1511]: time="2025-07-06T23:59:28.109976847Z" level=info msg="RemovePodSandbox for \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" Jul 6 23:59:28.111315 containerd[1511]: time="2025-07-06T23:59:28.109998308Z" level=info msg="Forcibly stopping sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\"" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.146 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f3c3c4c9-a36c-4c4b-a180-4d2623fe83c4", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"f37fe02b1c73402becff4952952c6592d5411905a0710f06358b7a912c8684b3", Pod:"coredns-674b8bbfcf-szql8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali199bb77e4b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.146 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.146 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" iface="eth0" netns="" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.146 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.146 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.163 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.163 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.163 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.172 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.172 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" HandleID="k8s-pod-network.f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--szql8-eth0" Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.174 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.177976 containerd[1511]: 2025-07-06 23:59:28.176 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a" Jul 6 23:59:28.178503 containerd[1511]: time="2025-07-06T23:59:28.178000941Z" level=info msg="TearDown network for sandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" successfully" Jul 6 23:59:28.188565 containerd[1511]: time="2025-07-06T23:59:28.188459956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.188565 containerd[1511]: time="2025-07-06T23:59:28.188519248Z" level=info msg="RemovePodSandbox \"f2aaf65d7cb06526b0a7490f5332a72d128b5b28bb68ac8836f55f5f75860a2a\" returns successfully" Jul 6 23:59:28.189562 containerd[1511]: time="2025-07-06T23:59:28.188953553Z" level=info msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\"" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.219 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ca62d42-186f-4f68-9ed0-588257419b27", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab", Pod:"csi-node-driver-2k7qf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0253c7332d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.219 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.220 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" iface="eth0" netns="" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.220 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.220 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.242 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.242 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.242 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.248 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.248 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.250 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.254268 containerd[1511]: 2025-07-06 23:59:28.252 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.256770 containerd[1511]: time="2025-07-06T23:59:28.254323820Z" level=info msg="TearDown network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" successfully" Jul 6 23:59:28.256770 containerd[1511]: time="2025-07-06T23:59:28.254367883Z" level=info msg="StopPodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" returns successfully" Jul 6 23:59:28.256770 containerd[1511]: time="2025-07-06T23:59:28.254944597Z" level=info msg="RemovePodSandbox for \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\"" Jul 6 23:59:28.256770 containerd[1511]: time="2025-07-06T23:59:28.254967179Z" level=info msg="Forcibly stopping sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\"" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.288 [WARNING][5632] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ca62d42-186f-4f68-9ed0-588257419b27", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"aa351588eb30efa3929cb796fd446f86b7faf7907df95749a303ed8e0c334eab", Pod:"csi-node-driver-2k7qf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0253c7332d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.288 [INFO][5632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.288 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" iface="eth0" netns="" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.288 [INFO][5632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.289 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.306 [INFO][5639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.306 [INFO][5639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.306 [INFO][5639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.314 [WARNING][5639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.314 [INFO][5639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" HandleID="k8s-pod-network.0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Workload="ci--4081--3--4--2--e8b158d58b-k8s-csi--node--driver--2k7qf-eth0" Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.318 [INFO][5639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.322347 containerd[1511]: 2025-07-06 23:59:28.320 [INFO][5632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78" Jul 6 23:59:28.322898 containerd[1511]: time="2025-07-06T23:59:28.322400272Z" level=info msg="TearDown network for sandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" successfully" Jul 6 23:59:28.325282 containerd[1511]: time="2025-07-06T23:59:28.325215292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.325282 containerd[1511]: time="2025-07-06T23:59:28.325269023Z" level=info msg="RemovePodSandbox \"0ef6e22953c9f1c0813ec51eaeff4af44943660c2778f932a273cf077c832e78\" returns successfully" Jul 6 23:59:28.325805 containerd[1511]: time="2025-07-06T23:59:28.325763613Z" level=info msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.354 [WARNING][5653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f00b71d-ed32-42ca-b9e1-292600c3ce63", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc", Pod:"calico-apiserver-575b6688c4-kx7lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali205b4656c7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.354 [INFO][5653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.354 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" iface="eth0" netns="" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.354 [INFO][5653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.354 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.372 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.372 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.372 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.377 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.378 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.379 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.383752 containerd[1511]: 2025-07-06 23:59:28.381 [INFO][5653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.383752 containerd[1511]: time="2025-07-06T23:59:28.383670865Z" level=info msg="TearDown network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" successfully" Jul 6 23:59:28.383752 containerd[1511]: time="2025-07-06T23:59:28.383692416Z" level=info msg="StopPodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" returns successfully" Jul 6 23:59:28.385391 containerd[1511]: time="2025-07-06T23:59:28.384084201Z" level=info msg="RemovePodSandbox for \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" Jul 6 23:59:28.385391 containerd[1511]: time="2025-07-06T23:59:28.384105682Z" level=info msg="Forcibly stopping sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\"" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.419 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0", GenerateName:"calico-apiserver-575b6688c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f00b71d-ed32-42ca-b9e1-292600c3ce63", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"575b6688c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"1441780d6ca5da36bab3facf2599617f313d7a6beb9e77d7c777ccc4897343cc", Pod:"calico-apiserver-575b6688c4-kx7lz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali205b4656c7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.420 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.420 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" iface="eth0" netns="" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.420 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.420 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.437 [INFO][5681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.437 [INFO][5681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.437 [INFO][5681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.442 [WARNING][5681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.443 [INFO][5681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" HandleID="k8s-pod-network.2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--apiserver--575b6688c4--kx7lz-eth0" Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.445 [INFO][5681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.449489 containerd[1511]: 2025-07-06 23:59:28.446 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af" Jul 6 23:59:28.449489 containerd[1511]: time="2025-07-06T23:59:28.449020643Z" level=info msg="TearDown network for sandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" successfully" Jul 6 23:59:28.452654 containerd[1511]: time="2025-07-06T23:59:28.452613455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.463356 containerd[1511]: time="2025-07-06T23:59:28.463279118Z" level=info msg="RemovePodSandbox \"2562e67d7098a001d2285f040ac40f3522858e0637a36e262ec525c8b27804af\" returns successfully" Jul 6 23:59:28.464595 containerd[1511]: time="2025-07-06T23:59:28.464528726Z" level=info msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\"" Jul 6 23:59:28.483436 kubelet[2716]: I0706 23:59:28.478623 2716 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:59:28.483436 kubelet[2716]: I0706 23:59:28.482459 2716 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.505 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8", Pod:"coredns-674b8bbfcf-x79tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e200a1f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.506 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.506 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" iface="eth0" netns="" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.506 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.506 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.533 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.533 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.533 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.539 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.539 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.541 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.547133 containerd[1511]: 2025-07-06 23:59:28.544 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.547822 containerd[1511]: time="2025-07-06T23:59:28.547188634Z" level=info msg="TearDown network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" successfully" Jul 6 23:59:28.547937 containerd[1511]: time="2025-07-06T23:59:28.547813510Z" level=info msg="StopPodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" returns successfully" Jul 6 23:59:28.548563 containerd[1511]: time="2025-07-06T23:59:28.548324259Z" level=info msg="RemovePodSandbox for \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\"" Jul 6 23:59:28.548563 containerd[1511]: time="2025-07-06T23:59:28.548349466Z" level=info msg="Forcibly stopping sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\"" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.580 [WARNING][5718] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db7c0d17-ff75-46b9-b7cd-e8f7f2291fe7", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"7329c43a4b894f8e70fc00c35da42ef2a0ac8bb71257cf203cb98b22bbe842c8", Pod:"coredns-674b8bbfcf-x79tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e200a1f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.580 [INFO][5718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.580 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" iface="eth0" netns="" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.580 [INFO][5718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.581 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.606 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.607 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.607 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.615 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.615 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" HandleID="k8s-pod-network.c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Workload="ci--4081--3--4--2--e8b158d58b-k8s-coredns--674b8bbfcf--x79tg-eth0" Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.618 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.627426 containerd[1511]: 2025-07-06 23:59:28.623 [INFO][5718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb" Jul 6 23:59:28.629631 containerd[1511]: time="2025-07-06T23:59:28.627576563Z" level=info msg="TearDown network for sandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" successfully" Jul 6 23:59:28.631991 containerd[1511]: time="2025-07-06T23:59:28.631919775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.631991 containerd[1511]: time="2025-07-06T23:59:28.631989026Z" level=info msg="RemovePodSandbox \"c5c7b36f5a0c56ea396568c9752bae309b43382217b11f2517e781e7a5a359fb\" returns successfully" Jul 6 23:59:28.632854 containerd[1511]: time="2025-07-06T23:59:28.632591497Z" level=info msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\"" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.665 [WARNING][5740] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"51e7e008-60ac-4100-81fe-67be6774ad5f", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e", Pod:"goldmane-768f4c5c69-966nl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb246ecca0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.665 [INFO][5740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.665 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" iface="eth0" netns="" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.665 [INFO][5740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.665 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.683 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.683 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.683 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.689 [WARNING][5747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.689 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.691 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.695854 containerd[1511]: 2025-07-06 23:59:28.693 [INFO][5740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.696490 containerd[1511]: time="2025-07-06T23:59:28.696286446Z" level=info msg="TearDown network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" successfully" Jul 6 23:59:28.696490 containerd[1511]: time="2025-07-06T23:59:28.696339395Z" level=info msg="StopPodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" returns successfully" Jul 6 23:59:28.696878 containerd[1511]: time="2025-07-06T23:59:28.696847329Z" level=info msg="RemovePodSandbox for \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\"" Jul 6 23:59:28.696878 containerd[1511]: time="2025-07-06T23:59:28.696877076Z" level=info msg="Forcibly stopping sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\"" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.728 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"51e7e008-60ac-4100-81fe-67be6774ad5f", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"25828a18626a30e215f0fa02e3f2890ec2b3e26dd99f29a5fda24f0d2294941e", Pod:"goldmane-768f4c5c69-966nl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cb246ecca0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.728 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.728 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" iface="eth0" netns="" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.728 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.728 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.746 [INFO][5768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.746 [INFO][5768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.746 [INFO][5768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.752 [WARNING][5768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.752 [INFO][5768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" HandleID="k8s-pod-network.474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Workload="ci--4081--3--4--2--e8b158d58b-k8s-goldmane--768f4c5c69--966nl-eth0" Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.754 [INFO][5768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.758765 containerd[1511]: 2025-07-06 23:59:28.756 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79" Jul 6 23:59:28.760560 containerd[1511]: time="2025-07-06T23:59:28.758765139Z" level=info msg="TearDown network for sandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" successfully" Jul 6 23:59:28.761716 containerd[1511]: time="2025-07-06T23:59:28.761677261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.761774 containerd[1511]: time="2025-07-06T23:59:28.761750208Z" level=info msg="RemovePodSandbox \"474f8a13a5161f7b28a5db306c4df1e16772effc09ea93a0e6885a7af93cbd79\" returns successfully" Jul 6 23:59:28.762310 containerd[1511]: time="2025-07-06T23:59:28.762289582Z" level=info msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.797 [WARNING][5783] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.797 [INFO][5783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.797 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" iface="eth0" netns="" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.797 [INFO][5783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.797 [INFO][5783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.823 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.824 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.824 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.830 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.830 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.831 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.836334 containerd[1511]: 2025-07-06 23:59:28.833 [INFO][5783] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.836743 containerd[1511]: time="2025-07-06T23:59:28.836378513Z" level=info msg="TearDown network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" successfully" Jul 6 23:59:28.836743 containerd[1511]: time="2025-07-06T23:59:28.836433015Z" level=info msg="StopPodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" returns successfully" Jul 6 23:59:28.837049 containerd[1511]: time="2025-07-06T23:59:28.837008498Z" level=info msg="RemovePodSandbox for \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" Jul 6 23:59:28.837090 containerd[1511]: time="2025-07-06T23:59:28.837048012Z" level=info msg="Forcibly stopping sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\"" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.877 [WARNING][5805] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" WorkloadEndpoint="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.878 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.878 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" iface="eth0" netns="" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.878 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.878 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.902 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.902 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.902 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.909 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.909 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" HandleID="k8s-pod-network.a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Workload="ci--4081--3--4--2--e8b158d58b-k8s-whisker--7f5c8dd9b9--c5mwr-eth0" Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.911 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:28.915795 containerd[1511]: 2025-07-06 23:59:28.913 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0" Jul 6 23:59:28.916539 containerd[1511]: time="2025-07-06T23:59:28.915822427Z" level=info msg="TearDown network for sandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" successfully" Jul 6 23:59:28.919627 containerd[1511]: time="2025-07-06T23:59:28.919573957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:28.919627 containerd[1511]: time="2025-07-06T23:59:28.919642918Z" level=info msg="RemovePodSandbox \"a809b693670cc050f46f95bc42b33c5c8f948dd65e47c258bac0f08a344bf2d0\" returns successfully" Jul 6 23:59:28.920335 containerd[1511]: time="2025-07-06T23:59:28.920018303Z" level=info msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" Jul 6 23:59:29.016202 kubelet[2716]: I0706 23:59:29.015930 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2k7qf" podStartSLOduration=28.243162468 podStartE2EDuration="43.015906659s" podCreationTimestamp="2025-07-06 23:58:46 +0000 UTC" firstStartedPulling="2025-07-06 23:59:12.93648763 +0000 UTC m=+45.769667706" lastFinishedPulling="2025-07-06 23:59:27.709231822 +0000 UTC m=+60.542411897" observedRunningTime="2025-07-06 23:59:28.99521212 +0000 UTC m=+61.828392205" watchObservedRunningTime="2025-07-06 23:59:29.015906659 +0000 UTC m=+61.849086735" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.970 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0", GenerateName:"calico-kube-controllers-985fd5996-", Namespace:"calico-system", SelfLink:"", UID:"3959008a-f171-4a04-98c6-424b260ddb24", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"985fd5996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01", Pod:"calico-kube-controllers-985fd5996-856v9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80538ed037e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.970 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.970 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" iface="eth0" netns="" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.970 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.970 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.997 [INFO][5833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.997 [INFO][5833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:28.997 [INFO][5833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:29.009 [WARNING][5833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:29.010 [INFO][5833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:29.012 [INFO][5833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:29.018266 containerd[1511]: 2025-07-06 23:59:29.014 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.020187 containerd[1511]: time="2025-07-06T23:59:29.018491548Z" level=info msg="TearDown network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" successfully" Jul 6 23:59:29.020187 containerd[1511]: time="2025-07-06T23:59:29.018516704Z" level=info msg="StopPodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" returns successfully" Jul 6 23:59:29.020187 containerd[1511]: time="2025-07-06T23:59:29.019087828Z" level=info msg="RemovePodSandbox for \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" Jul 6 23:59:29.020187 containerd[1511]: time="2025-07-06T23:59:29.019111192Z" level=info msg="Forcibly stopping sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\"" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.055 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0", GenerateName:"calico-kube-controllers-985fd5996-", Namespace:"calico-system", SelfLink:"", UID:"3959008a-f171-4a04-98c6-424b260ddb24", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"985fd5996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-2-e8b158d58b", ContainerID:"20c4c2ebe60922acc35629af9bebfa879501c48c9b2527840ef131aadb449b01", Pod:"calico-kube-controllers-985fd5996-856v9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80538ed037e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.055 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.055 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" iface="eth0" netns="" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.055 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.055 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.074 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.074 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.074 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.081 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.081 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" HandleID="k8s-pod-network.aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Workload="ci--4081--3--4--2--e8b158d58b-k8s-calico--kube--controllers--985fd5996--856v9-eth0" Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.082 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:59:29.087482 containerd[1511]: 2025-07-06 23:59:29.084 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2" Jul 6 23:59:29.087482 containerd[1511]: time="2025-07-06T23:59:29.086921131Z" level=info msg="TearDown network for sandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" successfully" Jul 6 23:59:29.090632 containerd[1511]: time="2025-07-06T23:59:29.090580969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:59:29.090788 containerd[1511]: time="2025-07-06T23:59:29.090741711Z" level=info msg="RemovePodSandbox \"aab6ddbaee8f24308d9ddebd72dd20a7fc0e969f591d1b89e6c967abcbe863f2\" returns successfully" Jul 6 23:59:42.173503 kubelet[2716]: I0706 23:59:42.173340 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:56.888116 systemd[1]: run-containerd-runc-k8s.io-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4-runc.HLPm8r.mount: Deactivated successfully. Jul 7 00:00:06.686756 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:06.694646 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:06.831789 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:06.831965 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Jul 7 00:00:06.879283 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:16.721039 systemd[1]: run-containerd-runc-k8s.io-a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49-runc.qMqjv4.mount: Deactivated successfully. Jul 7 00:01:16.706669 systemd[1]: run-containerd-runc-k8s.io-a59a41375be4f653b3682752f83c835d029d061a0cc9f6508c99040392f40f49-runc.AA8iwg.mount: Deactivated successfully. Jul 7 00:01:36.591784 systemd[1]: run-containerd-runc-k8s.io-3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45-runc.auA0JG.mount: Deactivated successfully. Jul 7 00:02:17.816275 systemd[1]: run-containerd-runc-k8s.io-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4-runc.2zHPyu.mount: Deactivated successfully. Jul 7 00:02:26.882808 systemd[1]: run-containerd-runc-k8s.io-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4-runc.tzB86L.mount: Deactivated successfully. 
Jul 7 00:03:06.595671 systemd[1]: run-containerd-runc-k8s.io-3b468afb21e8f37ef33c0ac533a5e06bb15952155800adbf2ade99bb96499a45-runc.jj87Uy.mount: Deactivated successfully.
Jul 7 00:03:22.771751 systemd[1]: Started sshd@8-157.180.92.196:22-147.75.109.163:54052.service - OpenSSH per-connection server daemon (147.75.109.163:54052).
Jul 7 00:03:23.790692 sshd[6596]: Accepted publickey for core from 147.75.109.163 port 54052 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:23.793830 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:23.802984 systemd-logind[1486]: New session 8 of user core.
Jul 7 00:03:23.807584 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 00:03:24.933786 sshd[6596]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:24.937756 systemd[1]: sshd@8-157.180.92.196:22-147.75.109.163:54052.service: Deactivated successfully.
Jul 7 00:03:24.939990 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 00:03:24.941015 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
Jul 7 00:03:24.942810 systemd-logind[1486]: Removed session 8.
Jul 7 00:03:30.109030 systemd[1]: Started sshd@9-157.180.92.196:22-147.75.109.163:38036.service - OpenSSH per-connection server daemon (147.75.109.163:38036).
Jul 7 00:03:31.160141 sshd[6633]: Accepted publickey for core from 147.75.109.163 port 38036 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:31.162745 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:31.167117 systemd-logind[1486]: New session 9 of user core.
Jul 7 00:03:31.171564 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 00:03:32.193283 sshd[6633]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:32.196654 systemd[1]: sshd@9-157.180.92.196:22-147.75.109.163:38036.service: Deactivated successfully.
Jul 7 00:03:32.197042 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
Jul 7 00:03:32.198772 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 00:03:32.199681 systemd-logind[1486]: Removed session 9.
Jul 7 00:03:32.366233 systemd[1]: Started sshd@10-157.180.92.196:22-147.75.109.163:38042.service - OpenSSH per-connection server daemon (147.75.109.163:38042).
Jul 7 00:03:33.389260 sshd[6648]: Accepted publickey for core from 147.75.109.163 port 38042 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:33.390820 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:33.395648 systemd-logind[1486]: New session 10 of user core.
Jul 7 00:03:33.404561 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 00:03:34.186341 sshd[6648]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:34.189665 systemd[1]: sshd@10-157.180.92.196:22-147.75.109.163:38042.service: Deactivated successfully.
Jul 7 00:03:34.192118 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 00:03:34.193343 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
Jul 7 00:03:34.194613 systemd-logind[1486]: Removed session 10.
Jul 7 00:03:34.360729 systemd[1]: Started sshd@11-157.180.92.196:22-147.75.109.163:38048.service - OpenSSH per-connection server daemon (147.75.109.163:38048).
Jul 7 00:03:35.391007 sshd[6660]: Accepted publickey for core from 147.75.109.163 port 38048 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:35.392486 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:35.396703 systemd-logind[1486]: New session 11 of user core.
Jul 7 00:03:35.402570 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 00:03:36.168545 sshd[6660]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:36.171987 systemd[1]: sshd@11-157.180.92.196:22-147.75.109.163:38048.service: Deactivated successfully.
Jul 7 00:03:36.174070 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 00:03:36.175457 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit.
Jul 7 00:03:36.177214 systemd-logind[1486]: Removed session 11.
Jul 7 00:03:41.343713 systemd[1]: Started sshd@12-157.180.92.196:22-147.75.109.163:58862.service - OpenSSH per-connection server daemon (147.75.109.163:58862).
Jul 7 00:03:42.400934 sshd[6700]: Accepted publickey for core from 147.75.109.163 port 58862 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:42.403995 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:42.410919 systemd-logind[1486]: New session 12 of user core.
Jul 7 00:03:42.416566 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 00:03:43.190655 sshd[6700]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:43.194647 systemd[1]: sshd@12-157.180.92.196:22-147.75.109.163:58862.service: Deactivated successfully.
Jul 7 00:03:43.196272 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 00:03:43.197018 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
Jul 7 00:03:43.197997 systemd-logind[1486]: Removed session 12.
Jul 7 00:03:48.369524 systemd[1]: Started sshd@13-157.180.92.196:22-147.75.109.163:50720.service - OpenSSH per-connection server daemon (147.75.109.163:50720).
Jul 7 00:03:49.386175 sshd[6753]: Accepted publickey for core from 147.75.109.163 port 50720 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:49.387962 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:49.392351 systemd-logind[1486]: New session 13 of user core.
Jul 7 00:03:49.397557 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 00:03:50.156554 sshd[6753]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:50.161203 systemd[1]: sshd@13-157.180.92.196:22-147.75.109.163:50720.service: Deactivated successfully.
Jul 7 00:03:50.163718 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 00:03:50.164980 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit.
Jul 7 00:03:50.167118 systemd-logind[1486]: Removed session 13.
Jul 7 00:03:55.332403 systemd[1]: Started sshd@14-157.180.92.196:22-147.75.109.163:50724.service - OpenSSH per-connection server daemon (147.75.109.163:50724).
Jul 7 00:03:56.337372 sshd[6766]: Accepted publickey for core from 147.75.109.163 port 50724 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:56.339594 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:56.345769 systemd-logind[1486]: New session 14 of user core.
Jul 7 00:03:56.352568 systemd[1]: Started session-14.scope - Session 14 of User core.
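Sessions 8 through 14 above all follow the same four-step lifecycle: sshd accepts the public key, PAM opens the session, systemd-logind assigns a session number, and a session-N.scope unit is started, then everything is torn down in reverse. When auditing a journal like this one, the sshd PID in brackets is the join key between the open and close records. A self-contained Go sketch that pairs them (the regexes assume exactly the journal format shown here):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`sshd\[(\d+)\]: Accepted publickey for (\S+) from (\S+) port (\d+)`)
	closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
)

func main() {
	active := map[string]string{} // sshd PID -> "user addr:port"
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			active[m[1]] = fmt.Sprintf("%s %s:%s", m[2], m[3], m[4])
		} else if m := closed.FindStringSubmatch(line); m != nil {
			fmt.Printf("sshd[%s] (%s) closed\n", m[1], active[m[1]])
			delete(active, m[1])
		}
	}
}
```

Fed this section of the journal, it would report seven short-lived logins for user core from 147.75.109.163, matching the session-8 through session-14 scopes.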
Jul 7 00:03:56.904795 systemd[1]: run-containerd-runc-k8s.io-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4-runc.M8kKSY.mount: Deactivated successfully.
Jul 7 00:03:57.170510 sshd[6766]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:57.177179 systemd[1]: sshd@14-157.180.92.196:22-147.75.109.163:50724.service: Deactivated successfully.
Jul 7 00:03:57.179188 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 00:03:57.180943 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
Jul 7 00:03:57.182714 systemd-logind[1486]: Removed session 14.
Jul 7 00:03:57.347681 systemd[1]: Started sshd@15-157.180.92.196:22-147.75.109.163:51386.service - OpenSSH per-connection server daemon (147.75.109.163:51386).
Jul 7 00:03:58.378150 sshd[6801]: Accepted publickey for core from 147.75.109.163 port 51386 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:03:58.379670 sshd[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:03:58.384835 systemd-logind[1486]: New session 15 of user core.
Jul 7 00:03:58.389567 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:03:59.316664 sshd[6801]: pam_unix(sshd:session): session closed for user core
Jul 7 00:03:59.320925 systemd[1]: sshd@15-157.180.92.196:22-147.75.109.163:51386.service: Deactivated successfully.
Jul 7 00:03:59.322858 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:03:59.326832 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:03:59.327916 systemd-logind[1486]: Removed session 15.
Jul 7 00:03:59.486239 systemd[1]: Started sshd@16-157.180.92.196:22-147.75.109.163:51394.service - OpenSSH per-connection server daemon (147.75.109.163:51394).
Jul 7 00:04:00.518499 sshd[6812]: Accepted publickey for core from 147.75.109.163 port 51394 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:04:00.520001 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:00.524968 systemd-logind[1486]: New session 16 of user core.
Jul 7 00:04:00.528576 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:04:02.138844 sshd[6812]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:02.146041 systemd[1]: sshd@16-157.180.92.196:22-147.75.109.163:51394.service: Deactivated successfully.
Jul 7 00:04:02.147912 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:04:02.148934 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:04:02.152901 systemd-logind[1486]: Removed session 16.
Jul 7 00:04:02.307490 systemd[1]: Started sshd@17-157.180.92.196:22-147.75.109.163:51402.service - OpenSSH per-connection server daemon (147.75.109.163:51402).
Jul 7 00:04:03.357337 sshd[6833]: Accepted publickey for core from 147.75.109.163 port 51402 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:04:03.361488 sshd[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:03.370013 systemd-logind[1486]: New session 17 of user core.
Jul 7 00:04:03.375625 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:04:04.611055 sshd[6833]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:04.616322 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:04:04.616680 systemd[1]: sshd@17-157.180.92.196:22-147.75.109.163:51402.service: Deactivated successfully.
Jul 7 00:04:04.617913 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:04:04.619493 systemd-logind[1486]: Removed session 17.
Jul 7 00:04:04.790573 systemd[1]: Started sshd@18-157.180.92.196:22-147.75.109.163:51406.service - OpenSSH per-connection server daemon (147.75.109.163:51406).
Jul 7 00:04:05.825575 sshd[6844]: Accepted publickey for core from 147.75.109.163 port 51406 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:04:05.827127 sshd[6844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:05.831503 systemd-logind[1486]: New session 18 of user core.
Jul 7 00:04:05.836552 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:04:06.705791 sshd[6844]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:06.709607 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:04:06.711368 systemd[1]: sshd@18-157.180.92.196:22-147.75.109.163:51406.service: Deactivated successfully.
Jul 7 00:04:06.713321 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:04:06.714389 systemd-logind[1486]: Removed session 18.
Jul 7 00:04:11.873001 systemd[1]: Started sshd@19-157.180.92.196:22-147.75.109.163:50812.service - OpenSSH per-connection server daemon (147.75.109.163:50812).
Jul 7 00:04:12.921556 sshd[6881]: Accepted publickey for core from 147.75.109.163 port 50812 ssh2: RSA SHA256:WO1o7mVFDf5n+bNY0zV07pWGN617llOWY24GfZ+AEOU
Jul 7 00:04:12.924068 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:04:12.928268 systemd-logind[1486]: New session 19 of user core.
Jul 7 00:04:12.934591 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:04:14.155238 sshd[6881]: pam_unix(sshd:session): session closed for user core
Jul 7 00:04:14.160441 systemd[1]: sshd@19-157.180.92.196:22-147.75.109.163:50812.service: Deactivated successfully.
Jul 7 00:04:14.162773 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:04:14.163740 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:04:14.165260 systemd-logind[1486]: Removed session 19.
Jul 7 00:04:26.880739 systemd[1]: run-containerd-runc-k8s.io-53da1a8abae424ddcf2db17dfe13d8ebf462b94c60021ceeb831269bcba33cb4-runc.O8hELN.mount: Deactivated successfully.
Jul 7 00:04:33.027538 systemd[1]: cri-containerd-93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8.scope: Deactivated successfully.
Jul 7 00:04:33.028544 systemd[1]: cri-containerd-93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8.scope: Consumed 3.047s CPU time, 21.7M memory peak, 0B memory swap peak.
Jul 7 00:04:33.068469 kubelet[2716]: E0707 00:04:33.067771 2716 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48832->10.0.0.2:2379: read: connection timed out"
Jul 7 00:04:33.194095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8-rootfs.mount: Deactivated successfully.
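The per-scope accounting lines ("Consumed 3.047s CPU time, 21.7M memory peak, 0B memory swap peak") are systemd reading the unit's cgroup v2 counters just before the cgroup is torn down. A Go sketch of the same readout, assuming a unified cgroup hierarchy; note that memory.peak only exists on kernels >= 5.19, and the scope path below is a hypothetical example:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readCPUUsec returns usage_usec from the cgroup's cpu.stat file.
func readCPUUsec(cg string) (uint64, error) {
	b, err := os.ReadFile(cg + "/cpu.stat")
	if err != nil {
		return 0, err
	}
	for _, ln := range strings.Split(string(b), "\n") {
		if f := strings.Fields(ln); len(f) == 2 && f[0] == "usage_usec" {
			return strconv.ParseUint(f[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("usage_usec not found in %s/cpu.stat", cg)
}

func main() {
	cg := "/sys/fs/cgroup/system.slice/example.scope" // hypothetical unit path
	usec, err := readCPUUsec(cg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	peak, _ := os.ReadFile(cg + "/memory.peak") // kernel >= 5.19 only
	fmt.Printf("consumed %.3fs CPU time, %s bytes memory peak\n",
		float64(usec)/1e6, strings.TrimSpace(string(peak)))
}
```

This is why the figures appear at scope deactivation: the counters must be sampled before the cgroup directory is removed.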
Jul 7 00:04:33.260434 containerd[1511]: time="2025-07-07T00:04:33.217845890Z" level=info msg="shim disconnected" id=93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8 namespace=k8s.io
Jul 7 00:04:33.262050 containerd[1511]: time="2025-07-07T00:04:33.260446268Z" level=warning msg="cleaning up after shim disconnected" id=93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8 namespace=k8s.io
Jul 7 00:04:33.262050 containerd[1511]: time="2025-07-07T00:04:33.260471706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:04:33.635062 systemd[1]: cri-containerd-7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d.scope: Deactivated successfully.
Jul 7 00:04:33.635703 systemd[1]: cri-containerd-7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d.scope: Consumed 5.154s CPU time, 28.3M memory peak, 0B memory swap peak.
Jul 7 00:04:33.666157 containerd[1511]: time="2025-07-07T00:04:33.666094805Z" level=info msg="shim disconnected" id=7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d namespace=k8s.io
Jul 7 00:04:33.666362 containerd[1511]: time="2025-07-07T00:04:33.666337654Z" level=warning msg="cleaning up after shim disconnected" id=7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d namespace=k8s.io
Jul 7 00:04:33.666469 containerd[1511]: time="2025-07-07T00:04:33.666454615Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:04:33.667566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d-rootfs.mount: Deactivated successfully.
Jul 7 00:04:33.670632 systemd[1]: cri-containerd-807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f.scope: Deactivated successfully.
Jul 7 00:04:33.670999 systemd[1]: cri-containerd-807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f.scope: Consumed 14.646s CPU time.
Jul 7 00:04:33.696384 containerd[1511]: time="2025-07-07T00:04:33.695905601Z" level=info msg="shim disconnected" id=807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f namespace=k8s.io
Jul 7 00:04:33.696384 containerd[1511]: time="2025-07-07T00:04:33.695977265Z" level=warning msg="cleaning up after shim disconnected" id=807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f namespace=k8s.io
Jul 7 00:04:33.696384 containerd[1511]: time="2025-07-07T00:04:33.695986163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:04:33.698676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f-rootfs.mount: Deactivated successfully.
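"cleaning up dead shim" marks containerd's runtime v2 recovery path: once a shim's connection drops, containerd force-deletes whatever runc state the shim left behind, and the cleanup warning in the next entry shows runc exiting 255 during exactly that step. A best-effort stand-in for the operation (not containerd's code; it shells out to the runc CLI, and the --root path is containerd's conventional per-namespace runc state directory):

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanupDeadShim force-deletes the leftover runc container for a task
// whose shim has gone away. Failure is logged rather than fatal, because
// the task itself is already dead; this mirrors the warning in the log.
func cleanupDeadShim(id string) {
	out, err := exec.Command("runc",
		"--root", "/run/containerd/runc/k8s.io", // k8s.io namespace state dir
		"delete", "--force", id).CombinedOutput()
	if err != nil {
		fmt.Printf("warning: failed to remove runc container: %v: %s\n", err, out)
	}
}

func main() {
	cleanupDeadShim("93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8")
}
```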
Jul 7 00:04:33.712613 containerd[1511]: time="2025-07-07T00:04:33.712561935Z" level=warning msg="cleanup warnings time=\"2025-07-07T00:04:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 00:04:33.854089 kubelet[2716]: I0707 00:04:33.854000 2716 scope.go:117] "RemoveContainer" containerID="7671eebbbe40551e0c042e2401f90b63051ba3213aa1e07edd3271419118fd1d"
Jul 7 00:04:33.857594 kubelet[2716]: I0707 00:04:33.857447 2716 scope.go:117] "RemoveContainer" containerID="807463ca9527c18cd1ebb84d1b794efc0bfffd6fc9fb3023dc0f02f7d212f50f"
Jul 7 00:04:33.860575 kubelet[2716]: I0707 00:04:33.860546 2716 scope.go:117] "RemoveContainer" containerID="93fe77df641d3d08a8fc44fc54f72515fa9cfaf7510762155779239111e95db8"
Jul 7 00:04:33.933455 containerd[1511]: time="2025-07-07T00:04:33.933191789Z" level=info msg="CreateContainer within sandbox \"f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:04:33.933455 containerd[1511]: time="2025-07-07T00:04:33.933299142Z" level=info msg="CreateContainer within sandbox \"017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:04:33.935445 containerd[1511]: time="2025-07-07T00:04:33.935325463Z" level=info msg="CreateContainer within sandbox \"7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 00:04:34.023012 containerd[1511]: time="2025-07-07T00:04:34.022692352Z" level=info msg="CreateContainer within sandbox \"017bdba14c415f49bc9d62d7629c0f0dd5fc6b6d2a6c32c48b75d01438da515f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"25d101f1105d5b7b0a834b28842ba4b6c799b771a44e19108548f897f372d9a2\""
Jul 7 00:04:34.023012 containerd[1511]: time="2025-07-07T00:04:34.022924240Z" level=info msg="CreateContainer within sandbox \"f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1e34dc3cf4e4bd68b4dace2ad068707510b914c1d0986c2c1e5b36022a4f3d55\""
Jul 7 00:04:34.023330 containerd[1511]: time="2025-07-07T00:04:34.023280060Z" level=info msg="CreateContainer within sandbox \"7401659e7cb15295cee7f36f4ce23e69992b362289219959bf619e8d4e005fde\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e597448efab4cd75381d8980c259e9f2f1cd2b137900b081dc715708ec12fdc3\""
Jul 7 00:04:34.026628 containerd[1511]: time="2025-07-07T00:04:34.026605871Z" level=info msg="StartContainer for \"e597448efab4cd75381d8980c259e9f2f1cd2b137900b081dc715708ec12fdc3\""
Jul 7 00:04:34.026788 containerd[1511]: time="2025-07-07T00:04:34.026773608Z" level=info msg="StartContainer for \"25d101f1105d5b7b0a834b28842ba4b6c799b771a44e19108548f897f372d9a2\""
Jul 7 00:04:34.028419 containerd[1511]: time="2025-07-07T00:04:34.026607725Z" level=info msg="StartContainer for \"1e34dc3cf4e4bd68b4dace2ad068707510b914c1d0986c2c1e5b36022a4f3d55\""
Jul 7 00:04:34.062600 systemd[1]: Started cri-containerd-1e34dc3cf4e4bd68b4dace2ad068707510b914c1d0986c2c1e5b36022a4f3d55.scope - libcontainer container 1e34dc3cf4e4bd68b4dace2ad068707510b914c1d0986c2c1e5b36022a4f3d55.
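Note the Attempt:1 in each ContainerMetadata: the kubelet kept the existing pod sandboxes, removed the dead containers, and asked containerd for replacements with the CRI attempt counter bumped from 0. A toy Go sketch of that bookkeeping (the types are illustrative stand-ins, not the real CRI protobufs):

```go
package main

import "fmt"

// containerMeta stands in for the CRI ContainerMetadata{Name, Attempt} pair.
type containerMeta struct {
	Name    string
	Attempt uint32
}

// restartInSandbox mirrors the log: the sandbox survives, the dead container
// record is removed, and a replacement is created with Attempt incremented.
func restartInSandbox(sandboxID string, dead containerMeta) containerMeta {
	fmt.Printf("RemoveContainer %q (attempt %d)\n", dead.Name, dead.Attempt)
	next := containerMeta{Name: dead.Name, Attempt: dead.Attempt + 1}
	fmt.Printf("CreateContainer %q within sandbox %s (attempt %d)\n",
		next.Name, sandboxID, next.Attempt)
	return next
}

func main() {
	restartInSandbox("f861ed991744b2690421da16fa6162d69026fc2e492adf0f1afb3ff75a17e2bf",
		containerMeta{Name: "kube-controller-manager", Attempt: 0})
}
```

The attempt counter is what lets containerd name the replacement container distinctly while the pod UID and sandbox stay the same.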
Jul 7 00:04:34.069520 systemd[1]: Started cri-containerd-e597448efab4cd75381d8980c259e9f2f1cd2b137900b081dc715708ec12fdc3.scope - libcontainer container e597448efab4cd75381d8980c259e9f2f1cd2b137900b081dc715708ec12fdc3.
Jul 7 00:04:34.087588 systemd[1]: Started cri-containerd-25d101f1105d5b7b0a834b28842ba4b6c799b771a44e19108548f897f372d9a2.scope - libcontainer container 25d101f1105d5b7b0a834b28842ba4b6c799b771a44e19108548f897f372d9a2.
Jul 7 00:04:34.125881 containerd[1511]: time="2025-07-07T00:04:34.125638212Z" level=info msg="StartContainer for \"e597448efab4cd75381d8980c259e9f2f1cd2b137900b081dc715708ec12fdc3\" returns successfully"
Jul 7 00:04:34.139173 containerd[1511]: time="2025-07-07T00:04:34.138789778Z" level=info msg="StartContainer for \"1e34dc3cf4e4bd68b4dace2ad068707510b914c1d0986c2c1e5b36022a4f3d55\" returns successfully"
Jul 7 00:04:34.155873 containerd[1511]: time="2025-07-07T00:04:34.155464957Z" level=info msg="StartContainer for \"25d101f1105d5b7b0a834b28842ba4b6c799b771a44e19108548f897f372d9a2\" returns successfully"
Jul 7 00:04:34.195346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529877123.mount: Deactivated successfully.
Jul 7 00:04:37.829968 kubelet[2716]: E0707 00:04:37.816201 2716 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48624->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-2-e8b158d58b.184fcf43238566ae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-2-e8b158d58b,UID:629dcf05c4e3e02db23291e7e5156339,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-2-e8b158d58b,},FirstTimestamp:2025-07-07 00:04:27.336115886 +0000 UTC m=+360.169295971,LastTimestamp:2025-07-07 00:04:27.336115886 +0000 UTC m=+360.169295971,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-2-e8b158d58b,}"
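The two kubelet errors bracketing this recovery ("Failed to update lease" earlier, "Server rejected event" here) share one root cause: reads from etcd at 10.0.0.2:2379 timed out, so both the node-lease update and the event POST came back as gRPC Unavailable. A client in this position retries with a bounded per-attempt deadline rather than blocking on a stalled backend; a generic Go sketch of that shape (updateLease is a placeholder that always times out, standing in for the real lease PATCH):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errUnavailable = errors.New("rpc error: code = Unavailable") // stand-in

// updateLease is a placeholder for the real API call; here it simply blocks
// until the per-attempt deadline fires, imitating a stalled etcd read.
func updateLease(ctx context.Context) error {
	<-ctx.Done()
	return errUnavailable
}

// renewWithRetries bounds each attempt with its own deadline so a stalled
// backend costs at most perTry per attempt instead of hanging forever.
func renewWithRetries(attempts int, perTry time.Duration) error {
	for i := 0; i < attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), perTry)
		err := updateLease(ctx)
		cancel()
		if err == nil {
			return nil
		}
		fmt.Printf("Failed to update lease (attempt %d): %v\n", i+1, err)
	}
	return fmt.Errorf("lease not renewed after %d attempts", attempts)
}

func main() {
	_ = renewWithRetries(3, 500*time.Millisecond)
}
```

The "(will not retry!)" in the event error is the other half of the strategy: events are best-effort, so the kubelet drops them rather than amplifying load on an already unhealthy etcd.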